About EF & E-Series, SANtricity, and Related Plug-ins
Join conversations about NetApp EF/E-Series storage systems, SANtricity, and related plug-ins. Ask questions and get feedback from other users about these efficient storage systems optimized for demanding application-driven environments.
As many of you are aware, there is a nice NetApp repo with an E-Series performance monitor, InfluxDB, and Grafana packaged as one monolithic Docker Compose setup that runs everything on a single VM: https://github.com/netApp/eseries-perf-analyzer

I created a friendly fork of it with the Collector as a stand-alone container: https://github.com/scaleoutsean/eseries-perf-analyzer

What this does is let you, an E-Series admin or Ops person, run and manage one Collector per array without touching the shared Grafana or InfluxDB containers. You still need InfluxDB (the older v1) to send data to, but that can be set up and managed by one team or person for the entire organization. For example, three teams can each run their own Collector and handle SANtricity password rotation without touching a shared instance or being able to access other people's E-Series arrays.

Upstream EPA v3.0.0 (the current version) doesn't make it easy to run multiple collectors. It is possible, and in fact done by default, but it relies on the SANtricity Web Services Proxy (WSP), which means one container that can manage all E-Series arrays. Many admins don't like that: the moment you let one admin access WSP, they can accidentally access all arrays, break InfluxDB, and so on. My fork removes the Collector's dependency on WSP: all an admin has to do is run a Python script (inside or outside of a single container) and specify a monitor (not admin) account on their E-Series array.

If you already use EPA v3.0.0, you can run additional Collectors alongside it and send data to the existing EPA v3.0.0 without adding Grafana and InfluxDB instances. The fork retains upstream's Docker Compose with Grafana/InfluxDB and adds another Compose file for a stand-alone Collector (or Collectors). The Collector container(s) can also be deployed on Kubernetes or run containerless. Each one needs less than 32 MB of RAM, so it takes less than 1 GB of memory to collect data from 32 arrays. In that way, running one per E-Series array is very affordable, given the security and operational benefits. Says I.

EPA users who wish to contribute, or who have an opinion or question, are welcome to leave a comment here.
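For readers wondering what a stand-alone Collector boils down to, here is a minimal sketch in Python of the general idea: poll a single array's embedded SANtricity REST API with a read-only monitor account and write the resulting points to an existing InfluxDB v1 database. The endpoint path, field names, measurement name, and environment variables below are illustrative assumptions, not the fork's actual code; see the repository for the real Collector.

```python
#!/usr/bin/env python3
# Sketch of a per-array collector: poll one E-Series array's embedded
# SANtricity REST API with a "monitor" account and push the numbers to an
# existing InfluxDB v1 database. Endpoint path, field selection and the
# environment variable names are illustrative assumptions, not EPA code.
import os
import time

import requests
from influxdb import InfluxDBClient   # InfluxDB v1 Python client

ARRAY_API   = os.environ["ARRAY_API"]              # e.g. https://array-a:8443/devmgr/v2
ARRAY_USER  = os.environ.get("ARRAY_USER", "monitor")
ARRAY_PASS  = os.environ["ARRAY_PASS"]
INFLUX_HOST = os.environ.get("INFLUX_HOST", "influxdb")
INTERVAL    = int(os.environ.get("INTERVAL", "60"))  # seconds between polls

influx = InfluxDBClient(host=INFLUX_HOST, port=8086, database="eseries")

while True:
    # Embedded Web Services: the array's own system ID is "1" (assumption).
    r = requests.get(
        f"{ARRAY_API}/storage-systems/1/analysed-volume-statistics",
        auth=(ARRAY_USER, ARRAY_PASS),
        verify=False,          # self-signed certificates are common on arrays
        timeout=30,
    )
    r.raise_for_status()

    points = []
    for vol in r.json():
        points.append({
            "measurement": "volume_statistics",            # illustrative name
            "tags": {"volume": vol.get("volumeName", "unknown")},
            "fields": {
                "read_iops":  float(vol.get("readIOps", 0)),
                "write_iops": float(vol.get("writeIOps", 0)),
            },
        })

    if points:
        influx.write_points(points)
    time.sleep(INTERVAL)
```

Running one such process or container per array, each with its own monitor credentials, is what keeps each team's access scoped to its own arrays while everyone shares a single InfluxDB/Grafana back end.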
I have an E-Series array and need to enlarge a volume in Linux, so I must first enlarge the LUN on the E-Series. Can someone help me enlarge the LUN? I couldn't see how to enlarge a LUN in SANtricity. Thanks, Adhye
I just wanted to share that Rocky Linux 8 and 9 have been added to the IMT for E-Series. Rocky Linux is designed to be bug-for-bug compatible with its upstream Linux distribution.
* Rocky Linux: https://rockylinux.org/ (free download)
* NetApp Interoperability Matrix: https://imt.netapp.com/ (login required)
Hello everyone,

Around 5 years ago, I installed a SAN to serve as a repository for the VMs from 2 VMware servers. It was my first time working with a SAN, so I had numerous issues (post below).

https://community.netapp.com/t5/EF-E-Series-SANtricity-and-Related-Plug-ins/First-storage-installation-NetApp-e-2824-questions-about-iSCSI-performance/m-p/141702#M619

At the end of the installation the performance was acceptable, so we decided to stick with that. We used 1Gb/s connections between the SAN (which had 10Gb/s NICs), a switch, and the 2 VMware servers.

Our needs increased, we kept adding VMs to that system, and it started to get slow. I decided to do an upgrade and added 2 x 10Gb/s switches and 10Gb/s NICs to the 2 VMware servers, so right now every piece of equipment in that environment is connected at 10Gb/s instead of 1Gb/s. I expected that this alone would give me not only HA (I added a separate path to each of those 2 switches) but also an increase in performance, but that hasn't happened and the performance seems about the same.

We're currently using active/passive on the VMware NICs, and DelayedAck is disabled. Does anyone have any idea what else I can investigate, or any good practice I might not be aware of?

Thanks, Carlos