General Discussion

netapp h610s-2


I have purchased this item (NetApp H610S-2) second-hand from a local business, and I cannot figure out how to set it up. I think the main OS was removed from the device.


Every time I try to search, I get a message that I am a guest and do not have access to the resources. (This can be very annoying, as I own the hardware but do not have access to the support I need.)

I have contacted support, but I get pushed from one department to another. Unless I misunderstand the business model, I almost feel that they do not want me to have access to the software, regardless of the fact that I own the hardware.

I would appreciate some direction on who I need to contact to gain access to the software that is made for the hardware I purchased.



I don't think they don't want you to have access to the software; it's more about the way Support works. Generally speaking, when a buyer buys this product from NetApp, they also get the custom software license and buy at least one year of Support and Maintenance. The license and entitlement are tied to the buyer and their contact info. But if they sell the unit, that's where things get complicated. For example, even if the product is still under maintenance, only the direct buyer can access it. If it's not, I'm not sure whether the new buyer can buy maintenance even if they wanted to (I haven't checked).


Regarding the "box": it should work with Linux.




- This storage system was never shipped with a generic OS. It shipped with a custom appliance OS, which can be downloaded from the Support web site *if* the original buyer's maintenance contract is still in effect. If not, they won't be able to log in.


- It normally takes at least 4 units of H610S-[1,2,4] to form a cluster. If you have 1 or 2, you won't be able to use them with the appliance OS (3 would be possible, but without node redundancy).


Your options are probably as follows:


- If you have 4 or more boxes, maybe you can buy eSDS, which is licensed based on TB-months (example: 4 boxes x 24 TB per box x 12 months = 1,152 "units"). You could ask a NetApp account team or partner to check this for you.

- If you don't have 4 or more boxes, or don't have the budget for software, try Ubuntu 22.04 or Rocky Linux 9 and see if you can make it work (maybe create software RAID and use it as an NFS, SMB, or iSCSI server?). I haven't tried this, but I think there's a chance it may work. The box has NVDIMMs, which may or may not be recognized by another OS, but the NVMe drives and NICs should be, so I think it may be usable. The OS can be installed on the SATAdom (128 GB, IIRC), but don't use that SATAdom for a Linux swap file.
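To make that software-RAID idea concrete, here is a rough sketch of what it could look like on Ubuntu. I haven't tried this on an H610S; the device names, drive count, and export subnet are all assumptions (check yours with lsblk and ip addr), so the script only prints the commands unless you set APPLY=1.

```shell
#!/bin/sh
# Hypothetical sketch: software RAID-10 across four NVMe drives on Ubuntu,
# exported over NFS. Device names, drive count, and subnet are ASSUMPTIONS.
# Dry run by default: prints the commands; set APPLY=1 to actually run them.
run() { if [ "${APPLY:-0}" = "1" ]; then "$@"; else echo "would run: $*"; fi; }

MEMBERS="/dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1"

run sudo apt install -y mdadm nfs-kernel-server
run sudo mdadm --create /dev/md0 --level=10 --raid-devices=4 $MEMBERS
run sudo mkfs.ext4 /dev/md0
run sudo mkdir -p /srv/export
run sudo mount /dev/md0 /srv/export
# Export read/write to the local subnet (adjust to your network):
run sudo sh -c 'echo "/srv/export 192.168.0.0/24(rw,sync,no_subtree_check)" >> /etc/exports'
run sudo exportfs -ra
```

RAID-10 here mirrors pairs and stripes across them, which roughly mimics the two-copy protection the appliance OS would have provided, at the cost of half the raw capacity.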



Thank you very much for taking the time to reply.


I did try Windows Server 2012 (it cannot format the NVMe drives presented) and 2022 (it crashes at the setup stage with some PNP error).


I also tried ESXi 7 and ESXi 8, but every time I try to create a datastore, it fails because it cannot partition the drives.

I am not sure if there are drivers that need to be installed on ESXi for it to recognise the hardware correctly, but as I have no access to the "downloads area", I have no idea what software/drivers are available for this hardware.

At this point in time, I am not looking to purchase additional boxes, as I am not sure I understand how it all works; I also have no project ready for a setup like this using 4 boxes.

The hardware is good: 2 x 14-core Xeon Gold CPUs and 512 GB of RAM is not something I would put in the bin if it can be used.

I would mostly prefer the ESXi setup on the box (though it kills me that the drives cannot be added to a datastore), but I would also try the Linux OS.






I'm not surprised that other OSes get confused; as I mentioned, there are NVDIMMs among other things. The hardware supports only the Linux-based Element OS (with all the drivers included; this model was only ever sold as an appliance, never as a generic system for stand-alone (non-clustered) configurations), which explains why no one can do much about this situation.


I don't know if anyone has ever tried to make another OS work, but if I had to come up with some ideas:


- If the NVDIMMs look like they can be easily removed, remove them and retry a recent Windows, Linux, or ESXi.

- If you still experience issues, maybe try the latest Ubuntu Linux, take a photo of the crash screen, and try to get help on Linux forums; maybe safe boot or some boot parameters can be used to avoid the error. Maybe even try a "live" Linux distribution, which may be able to boot without trying to install itself on the disk drives.


> Does this server support a RAID card, so the drives can be hooked up and managed by the RAID controller?


I don't think so [1]. The way data protection works in Element OS is that two copies of data are made and stored on different nodes. That's why I suggested software RAID on Linux or Windows.


[1] Maybe you can make vROC work in the BIOS, although I'm not even sure that option is available. Also note that vROC is enabled in different ways in different BIOS versions, etc. On some compute (not storage) nodes that are part of the NetApp HCI portfolio, vROC can be enabled *on SATAdom* boot disks like this. On other compute nodes, the steps may be different yet again.

On H610S-[1,2,4], I'm not sure if it works at all for either SATAdom or NVMe disks (you want to RAID the NVMe disks, which is different from the video, in which vROC is used to RAID SATAdom boot disks). But if you see vROC/RSTe in the BIOS, you can experiment with it and see if you can make it work. One additional silly thing about vROC is that many BIOSes include a version that works with non-NVMe disks (such as SATA disks), but that same vROC won't be able to RAID NVMe disks (which the H610S uses). So even if vROC is included and you can make it work, maybe it'll only work with the SATAdom boot disks.




I have tested:

- ESXi 6, 7, and 8 all fail to format the NVMe drives (no datastore can be created).

- Windows Server 2012 and 2022 both fail to initialise the drives (I installed all available drivers).

- Ubuntu Linux 22.04 failed to format the drives.

It seems the drives are presented to the OS, but every OS fails to use them.

I have decided to keep testing the hardware with different options until new hardware arrives (box, MB, PSU). Then I will pull the CPUs and RAM from the NetApp server and sell the rest or put it in the bin.

Thank you for the support received here.



Have you tried this on Ubuntu, and what kind of error or /var/log/syslog output do you see when you try?

sudo apt install nvme-cli
nvme help
nvme --help
nvme format --help
sudo nvme list
sudo nvme format -s1 <device>


Another question on the topic, if someone has the answer.

Does this server support a RAID card, so the drives can be hooked up and managed by the RAID controller?
That way it could be used as a standalone server.



I had to reply and say thank you for all the info provided.

And if I manage anything, I will post the progress here, along with a how-to, so others can use it.



Windows Server 2022 attempt

Removed the two "smart" RAM sticks, and Windows installed 🙂
Managed to see the NVMe storage, but when I tried to initialize the drives, I got an error and cannot progress from here.

I'm not sure if the drives are encrypted, or have some security on them, or whatever else is stopping the OS from accessing and formatting them; ESXi and Windows have the same problem.
Any suggestion on the matter is welcome.



Good, so that NVDIMM guess was correct...

From that point on, it's all Windows. I'd just install a bunch of drivers: chipset, etc.

Here's an example:

Basically, see what those devices are and get the drivers for the chipset, the NICs, and maybe even the disks.

Yes, the drives are encryption-capable, but I think you can erase them before you install.

I don't know how that works on Windows, but you could use one of those Linux rescue CDs to format the NVMe disks, then install Windows. But since you already have Windows installed, try the driver route first.
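For the Linux-rescue-CD route, the erase could look roughly like this with nvme-cli. I haven't tested this on an H610S, it is destructive, and the device name is an assumption, so the script only prints the commands unless you set APPLY=1.

```shell
#!/bin/sh
# Hypothetical sketch: erase one NVMe drive from a live/rescue Linux before
# installing Windows. DESTRUCTIVE, and the device name is an ASSUMPTION --
# verify it with "sudo nvme list" first. Dry run by default; set APPLY=1
# to actually execute the commands.
run() { if [ "${APPLY:-0}" = "1" ]; then "$@"; else echo "would run: $*"; fi; }

DEV=/dev/nvme0n1   # assumption -- confirm with: sudo nvme list

run sudo apt install -y nvme-cli
# Secure format with user-data erase (-s1); -s2 would request a crypto erase:
run sudo nvme format "$DEV" -s1
# If format is rejected, a sanitize block-erase may be worth trying:
run sudo nvme sanitize "$DEV" --sanact=2
```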


Hi, I am also running into issues writing to any of the 12 NVMe drives in an H610S. The symptom is that the operating system can read from the drives but not write to them, so it can't create VMFS datastores, for example. Could it be that the disk controller is refusing writes due to an un-flushed cache or something like that? I checked all the BIOS settings; there does not seem to be a menu item or any way to access the disk controller. I have tried Ubuntu: it sees the NVMe drives but can't modify the partition table. ESXi gives the error "failed to create VMFS datastore, cannot change the host configuration". Any help or ideas appreciated - thanks!


There's no need to guess when the system is under your control.


1) Can you create a new disk (some minimal size, such as 10 GB) and log in to that target from ESXi? If you cannot, then SolidFire should show some sort of error in Events/Logs: `Get-SFEvents` in PowerShell, or you can even forward the SolidFire cluster syslog to a Linux box for very verbose logs. If the NVRAM (which is the read/write cache on SolidFire nodes, an NVDIMM on the H610S) is damaged, I think that would prominently appear in the SolidFire UI as a Critical-level error. You may even see errors in the IPMI Web UI, since that's a hardware problem, but SolidFire captures hardware errors and surfaces them in its Events and Alerts, so looking directly in IPMI probably won't help you discover anything new.


2) If you can, can ESXi format it, and if not, what error does it report (presumably it can't access/write)? If ESXi can log in but not write, then just focus on SolidFire, assuming nothing has changed on the network since everything worked OK. But I'm not convinced this is about being unable to write to the target.


> ESXi gives an error "failed to create VMFS datastore, cannot change the host configuration".


There are several possible reasons for this. It does not say the disk is read-only.


If you can't solve it with ESXi, maybe access a new volume from a Linux VM connected to the iSCSI network, just to see if it can write (or reports an easier-to-understand error).
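The Linux-VM test could look roughly like this with open-iscsi. The portal address (the cluster's storage VIP) and the resulting disk name are assumptions, and the final write is destructive, so the script only prints the commands unless you set APPLY=1.

```shell
#!/bin/sh
# Hypothetical sketch: log in to the iSCSI target from a Linux VM and test
# whether the volume accepts writes. The portal address and the disk name
# that appears after login are ASSUMPTIONS. Dry run by default; set APPLY=1
# to actually execute the commands.
run() { if [ "${APPLY:-0}" = "1" ]; then "$@"; else echo "would run: $*"; fi; }

PORTAL=192.168.1.100   # assumption -- use your cluster's storage VIP

run sudo apt install -y open-iscsi
run sudo iscsiadm -m discovery -t sendtargets -p "$PORTAL"
run sudo iscsiadm -m node --login
# Once the disk shows up (check lsblk), try one direct write and watch for errors:
run sudo dd if=/dev/zero of=/dev/sdb bs=1M count=1 oflag=direct
```

If the dd write fails with an I/O error, that points at the target/volume side; if it succeeds, the problem is more likely on the ESXi side.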



Hi, thanks for your reply. I should have been clearer: I want to install ESXi on the H610S, not use the H610S as a storage node. So the OS running on the H610S will be ESXi (home lab). I tried many options, including nvme-cli, Samsung DC tools, etc., but could not write to or erase the drives. On a second H610S, I let the NetApp firmware boot, then chose to factory reset the node. I let that process complete, then booted the system from an ESXi USB install stick. On this system, I was able to install ESXi to a second USB stick (leaving the internal SATA flash drive intact). The factory reset seems to have done the trick and removed the drive encryption that seemed to be present on the other system. So I can now make VMFS filesystems on the NVMe drives.


I am exploring the idea that NetApp might be using NVMe drive encryption. I will look into using nvme-cli to try to remove the drive encryption and make the drives writeable.
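A possible starting point for that investigation is sedutil-cli (to check whether the drives are TCG Opal-locked) alongside nvme-cli. I haven't tried this on an H610S; the device name is an assumption, and the last two commands wipe the drive, so the script only prints the commands unless you set APPLY=1.

```shell
#!/bin/sh
# Hypothetical sketch: check whether an NVMe drive is TCG Opal-locked
# (sedutil-cli) and, if needed, erase it (nvme-cli). DESTRUCTIVE if applied,
# and the device name is an ASSUMPTION. Dry run by default; set APPLY=1
# to actually execute the commands.
run() { if [ "${APPLY:-0}" = "1" ]; then "$@"; else echo "would run: $*"; fi; }

DEV=/dev/nvme0n1   # assumption -- confirm with: sudo nvme list

# Does the drive advertise Opal support, and is locking enabled?
run sudo sedutil-cli --scan
run sudo sedutil-cli --query "$DEV"
# If it is not Opal-locked, a crypto-erase format may be enough:
run sudo nvme format "$DEV" -s2
# Last resort, a PSID revert (wipes everything); the PSID is printed on the
# drive's physical label -- the value below is a placeholder:
run sudo sedutil-cli --yesIreallywanttoERASEALLmydatausingthePSID PSID_FROM_DRIVE_LABEL "$DEV"
```

Your factory-reset result on the second box is consistent with an Opal-style lock being cleared, which is why checking the lock state first seems worthwhile.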