Featured topics
NetApp Announces Exciting Enhancements to the BlueXP Digital Wallet – by Cathi_Allen in Tech ONTAP Blogs, 2025-02-25
Google Cloud NetApp Volumes is a fully managed file storage service that reaches customers across all regions in Google Cloud through the Flex service ... – by rarvind in Tech ONTAP Blogs, 2025-02-04
Blog Activity
We will show you some of the steps and links that provide information about the installation and get you started with Manila – think of this as a Manila 101 🙂
By DeepakRaj in Tech ONTAP Blogs
"This article was orignally published on July 16, 2014"
Manila in Atlanta: OpenStack Summit Recap
Welcome to the OpenStack @ NetApp blog – there’s a lot going on here at NetApp around OpenStack, so we thought starting a blog would be a great way to get the word out!
About two months ago, NetApp sent a large contingent of folks to the biannual OpenStack Summit in Atlanta – where developers, operators, and users converge – (yes, both suits and hoodies were there) – to talk about their experiences around OpenStack, learn more about what’s new in the ecosystem, and help design the next release of OpenStack!
While there was a lot of energy around what NetApp is doing in OpenStack, I was most excited to see the energy around Manila – (not the city in the Philippines – we were in Atlanta, after all) – the OpenStack File Share Service! Manila allows users of OpenStack clouds to provision and securely manage shared file systems through a simple REST API. Manila’s share network concept links the tenant-specific Neutron network and the storage system providing the file share together to ensure a secure, logically isolated connection. Manila has a modular driver architecture, similar to Cinder, that allows different, heterogeneous storage solutions to serve as provisioning backends for file shares.
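If you want a feel for that API, here is a minimal sketch of the workflow from the Manila CLI; the share name, size, and client subnet are illustrative placeholders, and the available flags vary by release:
$ manila create NFS 1 --name demo_share                 # provision a 1 GB NFS share
$ manila access-allow demo_share ip 192.168.10.0/24     # grant a client subnet access to the share
$ manila list                                           # verify the share reaches 'available'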
There was a great 60-minute general session on Manila, which gave an overview of Manila, its API structure and key concepts, an architectural overview of the service, and then information on the growing number of drivers being integrated into the project. In the spirit of the community, we had seven presenters – representing NetApp, Red Hat, EMC, IBM, and Mirantis – all vendors who are active within the Manila project. Here’s a link to the recording in case you weren’t able to join us.
Leveraging the fact that we had many of the key project leaders all together in the same city – and wanting to harness the great energy level from earlier in the day – NetApp sponsored a Manila design session different from all others held in conjunction with the summit. We all gathered for a great technical discussion at a local sports bar where we discussed the state of the project, the key design and delivery items for the Juno release, and enjoyed some great food and beverages!
Want to learn more about Manila? Get started by checking our wiki page @ http://wiki.openstack.org/wiki/Manila or jump on IRC – we’re always hanging out in #openstack-manila and #openstack-netapp on freenode! We’ve got weekly meetings at 1500 UTC in #openstack-meeting-alt on freenode as well.
By DeepakRaj in Tech ONTAP Blogs
"This article was orignally published on Oct 6, 2016"
NetApp would like to congratulate the OpenStack Community on the Newton release of OpenStack, which is available as of today.
This time around, there is a strong focus on making it easier to use containers with OpenStack, and this theme touched several of the OpenStack projects. Here is a summary of some of the new features found in the Newton release:
Improved Scalability
The Newton release significantly reduces architectural and functional barriers to scalability, including the ability to scale up or down across platforms and geographies. This further cements OpenStack’s dominance as a solution for clouds of all sizes. Enhancements include improved scale-up/scale-down capabilities in Nova, Horizon and Swift; progress with Cells V2; the addition of convergence by default in Heat; and multi-tenancy improvements in Ironic.
Enhanced Resiliency
Newton is also notable for its advancements in high availability, adaptability, and self-healing, thus giving operators further assurance of stability regardless of workload demands. Cinder, Ironic, Neutron and Trove are among the projects that deliver improved availability/high availability functionality. For example, Cinder adds support for retyping encrypted to unencrypted volumes and vice versa. Additional enhancements in Cinder include micro-version support, the ability to delete volumes with snapshots using the cascading feature, and a backup service that can be scaled to multiple instances. Security improvements are included in Newton as well; for example, Keystone offers upgrades that include PCI compliance and encrypted credentials.
Expanded Versatility
The Newton release significantly advances OpenStack as the one cloud platform for virtualization, bare metal and containers. Magnum now offers provisioning for container orchestration tools, namely Swarm, Kubernetes and Mesos. Magnum’s new features include an operator-centric Install Guide, support for pluggable drivers, support for Kubernetes clusters via Ironic, and asynchronous cluster creation. For bare metal provisioning, Ironic adds multi-tenant networking and tighter integration with Magnum, Kubernetes and Nova; also, Kolla now supports deploying to Ironic. Kuryr brings Neutron networking capabilities to containers, making Swarm integration and Kubernetes integration available for the first time. Another Kuryr highlight is the capability to nest VMs through integration with Magnum and Neutron (early release).
As most of you know, NetApp is a big supporter of the Manila project (Fileshare-as-a-Service) and our own Ben Swartzlander is the Project-Team-Lead (PTL). Here he is talking about what’s new in the Newton Manila release:
Stay tuned to learn more, and if you have any questions, let us know in the comments below or on our Discord channel.
By DeepakRaj in Tech ONTAP Blogs
"This article was orignally published on Oct 14, 2016"
53%! Yup, that’s the percentage of organizations that can tolerate less than an hour of downtime before significant revenue loss! [1] Here comes Cheesecake to the rescue! No, we’re not talking about the kind that you can eat and forget all your problems (sorry!). Cheesecake is the codename given to Cinder replication for Disaster Recovery (DR) use-cases by the OpenStack community. Here’s a link to the design specification: https://specs.openstack.org/openstack/cinder-specs/specs/mitaka/cheesecake.html
‘Wait, I thought I could already have replication with Cinder?!’ – Well, yes – while you did have the option to set up pool-level (NetApp FlexVol) replication with the NetApp driver for Cinder, Cheesecake enables you to implement a backend-level disaster recovery mechanism. Thus, instead of failing over on a per-pool (FlexVol) basis, you can now fail over on a backend* (SVM) basis, which significantly reduces administrative complexity and service interruption!
* A Cinder backend for cDOT is considered a set of FlexVols on a given Vserver. These FlexVols are identified using the “netapp_pool_name_search_pattern” option.
Why Cheesecake?
Business environments have always desired and required 24/7 data availability. An enterprise’s storage must deliver the base building block for IT infrastructures, providing data storage for all business applications and objectives. Therefore, constant data availability begins with architecting storage systems that facilitate nondisruptive operations and minimal downtime. This functionality is desired in three principal areas: hardware resiliency, hardware and software lifecycle operations, and hardware and software maintenance operations.
Cheesecake provides a way for you to configure one or more disaster recovery partner storage systems to your cinder backend. So, if your cinder backend fails, you may, via a cinder API, flip a switch to continue your operations from one of the disaster recovery partner storage systems without losing access to your critical data for long periods of time.
How do I set it up?
We do realize that this section is a little long, but configuration is simple and straightforward – we promise!
Set up your NetApp backend to enable replication
If you’re setting up a new NetApp backend for Cinder, configure your NetApp backend as required – you can access the complete guide on configuring a NetApp backend with Cinder in our Deployment and Operations Guide. Once that’s done (or if you already have a NetApp backend), go ahead and add these two new parameters to your backend stanza in the cinder.conf file to enable Cinder replication.
replication_device = backend_id:target_cmodeiSCSI
This parameter allows you to set the backend that you want to use as your replication target, using its backend ID. Here, target_cmodeiSCSI denotes the name of another NetApp backend section that you may want to use as your replication target. Please note that while you can add this secondary/target backend to the “enabled_backends” parameter in cinder.conf, we highly recommend NOT doing so. Setting up your target backend as an enabled backend in Cinder may cause the Cinder scheduler to place volumes on it – thus reducing the available space for your host replicas.
netapp_replication_aggregate_map = backend_id:target_cmodeiSCSI,source_aggr_1:destination_aggr_1,source_aggr_2:destination_aggr_2
As the name suggests, this parameter allows you to create a source-to-destination aggregate map for your replicated FlexVols. It is recommended that you try to match the characteristics of the containing aggregates for all the FlexVols that make up your Cinder backend on your target backend. Please note that the storage efficiency properties of the source FlexVol will be preserved in the target FlexVol.
NetApp does support one-to-many target relationships. Both the “replication_device” and “netapp_replication_aggregate_map” parameters are repeatable, so if you don’t want to rely on a single target and want to replicate to multiple locations, you can easily do so. Here’s an example:
replication_device = backend_id:target_cmodeiSCSI_1
netapp_replication_aggregate_map = backend_id:target_cmodeiSCSI_1,src_aggr_1:dest_aggr_1,src_aggr_2:dest_aggr_2
replication_device = backend_id:target_cmodeiSCSI_2
netapp_replication_aggregate_map = backend_id:target_cmodeiSCSI_2,src_aggr_A:dest_aggr_A,src_aggr_B:dest_aggr_B
Example cinder.conf: Below is an example of what your cinder.conf file might look like with replication enabled. Please note that each replication target needs to have its own configuration stanza/section as part of the same cinder.conf. This is necessary because the driver addresses replication targets by their name, i.e., the replication_device’s backend_id parameter.
[DEFAULT]
...
enabled_backends = cmodeiSCSI
...
[cmodeiSCSI]
volume_backend_name = cmodeiSCSI
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_server_hostname = 10.63.152.xxx
netapp_server_port = 80
netapp_vserver = xyzzy
netapp_login = admin
netapp_password = *******
netapp_storage_protocol = iscsi # or nfs
netapp_storage_family = ontap_cluster
netapp_pool_name_search_pattern = .* # match all r/w FlexVols on the vServer
replication_device = backend_id:target_cmodeiSCSI
netapp_replication_aggregate_map = backend_id:target_cmodeiSCSI,source_aggr_1:destination_aggr_1,source_aggr_2:destination_aggr_2,source_aggr_3:destination_aggr_3
[target_cmodeiSCSI]
netapp_server_hostname = 10.63.152.xxx
netapp_server_port = 80
netapp_vserver = spoon_1
netapp_login = admin
netapp_password = Netapp123
netapp_storage_protocol = iscsi
netapp_storage_family = ontap_cluster
netapp_pool_name_search_pattern = .*
…
Enable Vserver (SVM) peering
You can read more about cluster and Vserver peering in our Express Guide.
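For reference, here is a rough sketch of what the peering commands can look like from the ONTAP CLI; the cluster names, SVM names, and intercluster LIF address are placeholders, so treat the Express Guide as the authoritative procedure:
cluster1::> cluster peer create -peer-addrs 10.0.0.50
cluster1::> vserver peer create -vserver src_svm -peer-vserver dst_svm -peer-cluster cluster2 -applications snapmirror
cluster2::> vserver peer accept -vserver dst_svm -peer-vserver src_svm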
Restart Cinder service
Ensure that you have everything set up properly
You may use “cinder service-list --withreplication” to check the service list with replication-related information: replication_status, active_backend_id, etc. The active_backend_id field for the cinder-volume service should currently have no value; this field will be populated with the target backend name after a failover has been initiated. The driver sets up all the SnapMirror relationships while initializing, and performs a periodic check to ensure that the SnapMirrors are healthy and updating. Any unexpected errors in this process are logged in the cinder-volume log file.
$ cinder service-list --withreplication
+------------------+-----------------------+------+---------+-------+---------------------
| Binary | Host | Zone | Status | State | Updated_at | Replication Status | Active Backend ID | Frozen | Disabled Reason |
+------------------+-----------------------+------+---------+-------+---------------------
| cinder-scheduler | openstack9 | nova | enabled | up | 2016-08-22T00:36:40.000000 | | | | - |
| cinder-volume | openstack9@cmodeiSCSI | nova | enabled | up | 2016-08-22T00:36:32.000000 | enabled | - | False | - |
+------------------+-----------------------+------+---------+-------+---------------------
Create a new volume type
We’re almost there! In order to specify the volumes that you want to replicate, create a new volume type with the following extra-spec: replication_enabled = ‘<is> True’. Please note that the value of this extra-spec is case-sensitive. Here’s an example:
$ cinder type-create cheesecake-volumes
+--------------------------------------+--------------------+-------------+-----------+
| ID | Name | Description | Is_Public |
+--------------------------------------+--------------------+-------------+-----------+
| a4dbd246-2d94-45ba-bf57-88379e78fc8e | cheesecake-volumes | - | True |
+--------------------------------------+--------------------+-------------+-----------+
$ cinder type-list
+--------------------------------------+--------------------+-------------+-----------+
| ID | Name | Description | Is_Public |
+--------------------------------------+--------------------+-------------+-----------+
| a4dbd246-2d94-45ba-bf57-88379e78fc8e | cheesecake-volumes | - | True |
+--------------------------------------+--------------------+-------------+-----------+
$ cinder type-key cheesecake-volumes set replication_enabled='<is> True'
$ cinder type-key cheesecake-volumes set volume_backend_name='cmodeiSCSI'
$ cinder extra-specs-list
+--------------------------------------+--------------------+-----------------------------
| ID | Name | extra_specs |
+--------------------------------------+--------------------+-----------------------------
| a4dbd246-2d94-45ba-bf57-88379e78fc8e | cheesecake-volumes | {'replication_enabled': '<is> True', 'volume_backend_name': 'cmodeiSCSI'} |
+--------------------------------------+--------------------+-----------------------------
Now this is pretty obvious, but in case you have multiple back-ends, any volume created with the above extra spec will get created on your replication backend. If you don’t set the extra-spec, it may or may not end up on the replication backend depending on the Cinder scheduler.
In case you want to ensure that a specific Cinder volume does not get replicated, set it up with the replication_enabled extra spec set to False: replication_enabled='<is> False'. Please note that if the SVM has other FlexVols that are accessible and match the netapp_pool_name_search_pattern parameter in the cinder.conf file, they will get replicated as well.
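For example, a volume type that opts out of replication could be created like this (the type name is illustrative):
$ cinder type-create standard-volumes
$ cinder type-key standard-volumes set replication_enabled='<is> False'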
Failing Over
OK, now that you have everything set up, we’re ready to fail over! Before we begin though, please note that the Cheesecake implementation allows only a one-time failover option. Failing back is not as simple as failing over and requires some additional steps and considerations – we’ll cover more details in another blog post at a later time.
Also, it’s good practice to use System Manager to monitor vital details like SnapMirror health, when the relationship was last updated, and so on. As of now, Cinder has no way to check these details for you.
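If you prefer the ONTAP CLI over System Manager, a quick sketch of checking the relationships is shown below; the destination path is a placeholder for your target SVM and FlexVol:
cluster2::> snapmirror show -destination-path dst_svm:dest_vol -fields state,healthy,lag-time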
Setting up a Nova VM to test for failover success:
In order to test the failover operation, we will now boot a Nova VM from a Cinder volume on the replication backend. You may skip this section.
First, here’s a list of my Cinder backends and volumes:
$ cinder get-pools
+----------+-------------------------------------+
| Property | Value |
+----------+-------------------------------------+
| name | openstack9@cmodeiSCSI#cheesecake_02 |
+----------+-------------------------------------+
+----------+-------------------------------------+
| Property | Value |
+----------+-------------------------------------+
| name | openstack9@cmodeiSCSI#cheesecake_01 |
+----------+-------------------------------------+
$ cinder list
+--------------------------------------+-----------+----------+------+--------------------
| ID | Status | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+----------+------+--------------------
| 732a40f2-1311-4fb1-ae4c-147b4cdf03f5 | available | my_vol_2 | 3 | cheesecake-volumes | false | |
| b6bfbca4-03e9-4afb-af5f-6830d50ca827 | available | my_vol_1 | 2 | cheesecake-volumes | false | |
+--------------------------------------+-----------+----------+------+--------------------
Next, let’s boot the Nova VM:
$ nova boot --flavor 1 --image cirros-0.3.4-x86_64-uec --block-device source=volume,id=b6bfbca4-03e9-4afb-af5f-6830d50ca827,dest=volume,shutdown=preserve myCirrosNovaInstance
+--------------------------------------+-------------------------------------------------+
| Property | Value
+--------------------------------------+-------------------------------------------------+
| OS-DCF:diskConfig | MANUAL
| OS-EXT-AZ:availability_zone |
| OS-EXT-SRV-ATTR:host | -
| OS-EXT-SRV-ATTR:hostname | mycirrosnovainstance
| OS-EXT-SRV-ATTR:hypervisor_hostname | -
| OS-EXT-SRV-ATTR:instance_name | instance-00000002
| OS-EXT-SRV-ATTR:kernel_id | 3e2348bd-341e-4a06-98ec-dabc35eec600
| OS-EXT-SRV-ATTR:launch_index | 0
| OS-EXT-SRV-ATTR:ramdisk_id | 34df5288-e26a-46fd-a2a0-293aa31dee4f
| OS-EXT-SRV-ATTR:reservation_id | r-j92c0yvp
| OS-EXT-SRV-ATTR:root_device_name | -
| OS-EXT-SRV-ATTR:user_data | -
| OS-EXT-STS:power_state | 0
| OS-EXT-STS:task_state | scheduling
| OS-EXT-STS:vm_state | building
| OS-SRV-USG:launched_at | -
| OS-SRV-USG:terminated_at | -
| accessIPv4 |
| accessIPv6 |
| adminPass | Lf2ySzJo9nnN
| config_drive |
| created | 2016-08-22T01:01:52Z
| description | -
| flavor | m1.tiny (1)
| hostId |
| host_status |
| id | c4bb39b8-c826-4396-991e-09da486d052d
| image | cirros-0.3.4-x86_64-uec (68745de9-4832-4e3a-900d-863141fb85da) |
| key_name | -
| locked | False
| metadata | {}
| name | myCirrosNovaInstance
| os-extended-volumes:volumes_attached | [{"id": "b6bfbca4-03e9-4afb-af5f-6830d50ca827", "delete_on_termination": false}] |
| progress | 0
| security_groups | default
| status | BUILD
| tags | []
| tenant_id | 417de83c946c41a5b3ea6b9a52c19fef
| updated | 2016-08-22T01:01:53Z
| user_id | 672b9acbc0bd4b1e924b4ca64b38bebf
+--------------------------------------+--------------------------------------------------
Failing over to a target:
You can fail over by using the cinder failover-host <hostname> --backend_id <failover target> command. If you have just one failover target, you can omit the --backend_id portion of the command, but including it is good practice anyway.
Here’s an example:
$ cinder failover-host openstack9@cmodeiSCSI --backend_id=target_gouthamr_02
After receiving the command, Cinder will disable the service and send a call to the driver to initiate the failover process. The driver then breaks all SnapMirror relationships, and the FlexVols under consideration become Read/Write. The driver also marks the primary site as dead, and starts to proxy the target (secondary) site as the primary. So if you run $cinder service-list --withreplication again, you’ll notice that the service has been disabled.
$ cinder service-list --withreplication
+------------------+-----------------------+------+----------+-------+
| Binary | Host | Zone | Status | State | Updated_at | Replication Status | Active Backend ID | Frozen | Disabled Reason |
+------------------+-----------------------+------+----------+-------+---------------------------
| cinder-scheduler | openstack9 | nova | enabled | up | 2016-08-22T01:06:30.000000 | | | | - |
| cinder-volume | openstack9@cmodeiSCSI | nova | disabled | up | 2016-08-22T01:06:29.000000 | failed-over | target_gouthamr_02 | False | failed-over |
+------------------+-----------------------+------+----------+-------+---------------------------
If you need to re-enable the service so that new volumes can be created on the backend, you may do so using the cinder service-enable command:
$ cinder service-enable openstack9@cmodeiSCSI cinder-volume
+-----------------------+---------------+---------+
| Host | Binary | Status |
+-----------------------+---------------+---------+
| openstack9@cmodeiSCSI | cinder-volume | enabled |
+-----------------------+---------------+---------+
NOTE:
For NFS, if you are using shares.conf to specify FlexVol mount paths, ensure that the NFS Data LIFs of the actual active_backend_id are reflected in the file and that the cinder-volume service is restarted after a failover (see the sketch after these notes).
Please note that since Cinder is proxying the secondary site (backend) as the primary, any new volumes that are created will have the backend-id (and other properties) of the first (primary) site.
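To illustrate the NFS note above, a shares.conf file simply lists one "data-LIF-address:/junction-path" entry per FlexVol; the addresses and volume names here are placeholders only, and after a failover they would need to point at the Data LIFs of the now-active secondary SVM:
192.168.100.10:/cinder_flexvol_01
192.168.100.10:/cinder_flexvol_02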
To ensure that our services are still up after failing over, let’s try to attach our failed-over volume to a VM.
$ nova volume-attach myCirrosNovaInstance 732a40f2-1311-4fb1-ae4c-147b4cdf03f5
+----------+--------------------------------------+
| Property | Value |
+----------+--------------------------------------+
| device | /dev/vdc |
| id | 732a40f2-1311-4fb1-ae4c-147b4cdf03f5 |
| serverId | c4bb39b8-c826-4396-991e-09da486d052d |
| volumeId | 732a40f2-1311-4fb1-ae4c-147b4cdf03f5 |
+----------+--------------------------------------+
$ cinder list
+--------------------------------------+--------+----------+------+--------------------+---------
| ID | Status | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+--------+----------+------+--------------------+---------
| 732a40f2-1311-4fb1-ae4c-147b4cdf03f5 | in-use | my_vol_2 | 3 | cheesecake-volumes | false | c4bb39b8-c826-4396-991e-09da486d052d |
| b6bfbca4-03e9-4afb-af5f-6830d50ca827 | in-use | my_vol_1 | 2 | cheesecake-volumes | false | c4bb39b8-c826-4396-991e-09da486d052d |
+--------------------------------------+--------+----------+------+--------------------+---------
Limitations & Considerations
One of the limitations is that the failover process for the Cinder backends needs to be initiated manually.
Also, since Nova does not know about the primary site (backend) going down, you will most likely end up with Zombie volumes because of unresolved connections to the primary site! This can potentially cause some service-level disruption.
To get around it, we recommend that you reset the Cinder state using the cinder reset-state command, and have a script re-attach volumes to your Nova VMs. You may even do it manually using the nova volume-detach and nova volume-attach commands.
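As a rough sketch of that cleanup, using the my_vol_1 volume and instance from the example above (adapt the IDs and the ordering to your environment; this is not an official recovery procedure):
$ nova volume-detach myCirrosNovaInstance b6bfbca4-03e9-4afb-af5f-6830d50ca827
$ cinder reset-state --state available b6bfbca4-03e9-4afb-af5f-6830d50ca827
$ nova volume-attach myCirrosNovaInstance b6bfbca4-03e9-4afb-af5f-6830d50ca827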
Your snapshot policies will also need to be added again on the secondary backend since they are not preserved during the failover process. Even though this is planned to be changed in the Ocata release of OpenStack, the default RPO as of today is 60 minutes, i.e. SnapMirror updates only take place after an hour has passed since the last successful replication. So please keep that in mind as you’re putting together your Disaster Recovery strategy.
Resources
[1] ESG Research Review: Data Protection Survey
By DeepakRaj in Tech ONTAP Blogs
"This article was orignally published on Apr 3, 2017"
Without doubt, security is one of the most important concerns for your enterprise – and so it should be for your OpenStack cloud. Encryption not only helps to protect your data, but also ensures compliance.
Starting with the Ocata release of OpenStack, the NetApp Cinder driver supports NetApp’s software-based Volume Encryption (NVE), which allows you to encrypt on a per-volume basis for greater flexibility – for example, when you need to encrypt certain volumes and not the entire storage array. While it’s recommended to get the latest NetApp driver through your OpenStack distribution, you can also find it in the upstream OpenStack repository: https://github.com/openstack/cinder
Configuring NVE requires you to install the associated license and enable onboard key management. Before installing the license, you should determine whether your ONTAP version supports NVE. You can also find some really good details about it and the benefits that it brings to the table in the ONTAP 9 documentation, the NetApp Encryption Guide, and this Tech ONTAP Podcast.
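On the ONTAP side, that setup boils down to roughly the following two cluster-shell steps; the license code is a placeholder, and the exact key-manager command depends on your ONTAP version (security key-manager setup applies to the 9.1-era releases this post targets):
cluster1::> system license add -license-code <NVE-license-code>
cluster1::> security key-manager setup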
From an OpenStack perspective, it’s actually pretty easy to set up!
Once you have volume encryption enabled on a backend, all that you need to do is set the netapp_flexvol_encryption extra-spec to ‘true’ for a new or existing volume-type.
Here’s an example of how you can create a new volume type with the netapp_flexvol_encryption extra-spec:
$ cinder type-create encrypted
$ cinder type-key encrypted set netapp_flexvol_encryption=true
Once that’s done, you can leverage this volume type to create encrypted Cinder volumes either through the ‘cinder create’ command, or through the Horizon dashboard.
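For instance, creating a 1 GB encrypted volume with that type might look like this (the volume name is illustrative):
$ cinder create 1 --volume-type encrypted --name my_encrypted_vol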
That’s it! It’s really that simple!
You can now have data-at-rest encryption with NVE, in addition to the existing encryption solution provided by Cinder: https://docs.openstack.org/security-guide/tenant-data/data-encryption.html. Combining the two solutions can help enforce security for data in transit and at rest, thus ensuring that your data cannot be read even if the underlying device is lost, stolen, or repurposed.
So go ahead and give it a try to see how NetApp’s Volume Encryption can bring security to your OpenStack cloud without sacrificing flexibility and performance!
Stay tuned to learn more, and if you have any questions, let us know in the comments below or on our Discord channel.
By DeepakRaj in Tech ONTAP Blogs