Active IQ Unified Manager Discussions

Grafana tool isn't showing any graphs or data

CSS
8,355 Views

The Grafana tool isn't showing any graphs or data. Upon checking my Grafana Debian server's logs, I found the error below. Can someone let me know how to resolve this issue?

 

“[2019-07-24 12:53:46] [WARNING] [workload_volume] update of data cache failed with reason: Aggregated instances requested for the workload_volume object exceeds the data capacity of the performance subsystem, because it includes 59088 constituent instances. With the current counter set, use the -node, -vserver, or -filter flags to include at most 40206 constituent instances in order to stay within the data capacity. Alternatively, requesting fewer counters will also reduce the required data and may allow more instances to be requested.

[2019-07-24 12:53:46] [WARNING] [workload_volume] data-list update failed.”

Hence, it seems the instances need to be reduced or split into smaller batches so that ONTAP will accept the requests made by the Grafana tool.
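For context, a rough way to see what the error is referring to is to query the workload_volume object directly on the cluster (the ssh target below is a placeholder, and the exact commands and privilege level can vary by ONTAP release, so treat this only as a sketch):

    # Placeholder hostname/credentials -- adjust to your environment.
    # Roughly count how many workload_volume instances the cluster exposes
    # (this is the number that exceeds the reported 40206-instance capacity;
    # the output also includes a few header lines):
    ssh admin@cluster1 "statistics catalog instance show -object workload_volume" | wc -l

    # List the counters available for workload_volume; requesting fewer of
    # them per poll is the other remedy the error message suggests:
    ssh admin@cluster1 "statistics catalog counter show -object workload_volume"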


11 REPLIES

vachagan_gratian
8,252 Views

Setting a custom batch_size for workload_volume might help.

 

- Open the template that your poller is using (you can find this in one of the first log messages printed when the poller starts). E.g., if the template is [cdot-9.3.0.conf], open /opt/netapp-harvest/template/default/cdot-9.3.0.conf in a text editor.

 

- Find the workload_volume section, which will look something like this:

    'workload_volume' =>
            {
                counter_list     => [ qw(instance_name instance_uuid
                                    ops read_ops write_ops
                                    total_data read_data write_data
                                    latency read_latency write_latency
                                    read_io_type sequential_reads sequential_writes
                                    ) ],
                graphite_leaf    => 'svm.{vserver}.workload.{\'policy-group\'}.{volume}.{qtree}.{lun}.{file}',
                plugin           => 'cdot-workload',
                plugin_leaf  => [ 'svm.{vserver}.qos_policy.{qos_policy}', 'svm.{vserver}.vol.{volume}.lun.{lun}',
                                    'svm.{vserver}.vol.{volume}.file.{file}', 'svm.{vserver}.vol.{volume}',
                                    'svm.{vserver}'
                                    ],
                plugin_options   => {'policy-group' => 1, 'workload' => 1},
                enabled          => '1'
            },

- Add the parameter batch_size => 250, at the top of the section, then save the file and restart Harvest (a sketch of the full edit-and-restart cycle follows the example below).

    'workload_volume' =>
            {
                batch_size => 250,
                counter_list     => [ qw(instance_name instance_uuid ....
                ....
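For reference, the full edit-and-restart cycle might look roughly like this on a default Harvest 1.x install (the /opt/netapp-harvest paths and the netapp-manager restart call are assumptions, so adjust them to your environment):

    # Assumes a default NetApp Harvest 1.x layout under /opt/netapp-harvest.
    cd /opt/netapp-harvest/template/default

    # Keep a backup before editing the template.
    cp cdot-9.3.0.conf cdot-9.3.0.conf.bak

    # Add "batch_size => 250," at the top of the 'workload_volume' section.
    vi cdot-9.3.0.conf

    # Restart the pollers so the modified template is re-read.
    /opt/netapp-harvest/netapp-manager -restart

    # Optionally confirm in the poller logs which template was loaded.
    grep -i template /opt/netapp-harvest/log/*.log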

If this doesn't help, let us know!

CSS
8,240 Views

@vachagan_gratian Thanks a lot for taking the time to reply to my query.

As you suggested, I have made the change as per the example shown below, but the same problem still exists.

cat /opt/netapp-harvest/template/default/cdot-9.3.0.conf

 'workload_volume' =>
         {
             batch_size       => 250,
             counter_list     => [ qw(instance_name instance_uuid
                                 ops read_ops write_ops
                                 total_data read_data write_data
                                 latency read_latency write_latency
                                 read_io_type sequential_reads sequential_writes
                                 ) ],
             graphite_leaf    => 'svm.{vserver}.workload.{\'policy-group\'}.{volume}.{qtree}.{lun}.{file}',
             plugin           => 'cdot-workload',
             plugin_leaf      => [ 'svm.{vserver}.qos_policy.{qos_policy}', 'svm.{vserver}.vol.{volume}.lun.{lun}',
                                 'svm.{vserver}.vol.{volume}.file.{file}', 'svm.{vserver}.vol.{volume}',
                                 'svm.{vserver}'
                                 ],
             plugin_options   => {'policy-group' => 1, 'workload' => 1},
             enabled          => '1'
         },
 
______________________
 
Files:
 
root@fgprd-oncommand-graphite-app003:/opt/netapp-harvest/template/default# ls -lrt
total 640
-rw-r--r-- 1 root root  2280 Jan 17  2018 ocum-6.4.0.conf
-rw-r--r-- 1 root root  2251 Jan 17  2018 ocum-6.3.0.conf
-rw-r--r-- 1 root root  2251 Jan 17  2018 ocum-6.2.0.conf
-rw-r--r-- 1 root root  2251 Jan 17  2018 ocum-6.1.0.conf
-rw-r--r-- 1 root root 48549 Jan 17  2018 cdot-9.0.0.conf
-rw-r--r-- 1 root root 47514 Jan 17  2018 cdot-8.3.2.conf
-rw-r--r-- 1 root root 47529 Jan 17  2018 cdot-8.3.0.conf
-rw-r--r-- 1 root root 45162 Jan 17  2018 cdot-8.2.4.conf
-rw-r--r-- 1 root root 43908 Jan 17  2018 cdot-8.2.0.conf
-rw-r--r-- 1 root root 15851 Jan 17  2018 cdot-8.1.0.conf
-rw-r--r-- 1 root root  9910 Jan 17  2018 7dot-8.2.0.conf
-rw-r--r-- 1 root root  9913 Jan 17  2018 7dot-8.1.0.conf
-rw-r--r-- 1 root root  8912 Jan 17  2018 7dot-8.0.0.conf
-rw-r--r-- 1 root root  8545 Jan 17  2018 7dot-7.3.0.conf
-rw-r--r-- 1 root root 49686 Feb 16  2018 cdot-9.1.0.conf
-rw-r--r-- 1 root root 51058 Feb 16  2018 cdot-9.2.0.conf
-rw-r--r-- 1 root root  2280 Feb 16  2018 ocum-7.0.0.conf
-rw-r--r-- 1 root root  2280 Apr 27  2018 ocum-7.1.0.conf
-rw-r--r-- 1 root root  2280 Apr 27  2018 ocum-7.2.0.conf
-rw-r--r-- 1 root root 51058 Jan  7  2019 cdot-9.5.0.conf
-rw-r--r-- 1 root root 51058 Jan  7  2019 cdot-9.4.0.conf
-rw-r--r-- 1 root root 51058 Aug  7 06:16 cdot-9.3.0.bck_aug
-rw-r--r-- 1 root root 51082 Aug  7 06:27 cdot-9.3.0.conf
 
________________
 
 
I restarted the harvest service.
Recent logs:
 
[2019-08-07 07:00:54] [WARNING] [workload_volume] data-list update failed.
[2019-08-07 07:06:19] [WARNING] [workload_volume] update of data cache failed with reason: Aggregated instances requested for the workload_volume object exceeds the data capacity of the performance subsystem, because it includes 58248 constituent instances. With the current counter set, use the -node, -vserver, or -filter flags to include at most 40206 constituent instances in order to stay within the data capacity. Alternatively, requesting fewer counters will also reduce the required data and may allow more instances to be requested.

CSS
8,152 Views

Can you please help here?

CSS
8,154 Views

This is how my Grafana looks now, without any data, even after setting batch_size to 250.

vachagan_gratian
8,119 Views

Thanks for the follow-up.

Would you like to set up a call sometime next week? I really would like to get to the bottom of this case, but it's hard to reproduce in my own environment.

CSS
8,068 Views

@vachagan_gratian Thanks a lot for that. I would be grateful if we could connect during one of the time slots below, if that works for you:

 

Join meeting: http://tinyurl.com/css-storage-team

Time (EST): 14/08/2019, 11:00 AM to 11:30 AM

or

Time (EST): 15/08/2019, 11:00 AM to 11:30 AM

Kindly confirm whichever time slot is convenient for you.

vachagan_gratian
8,066 Views

Cool, 14 August, 11:00 AM works fine for me. Speak to you then!

CSS
8,055 Views
Thanks very much for your kind help! See you then. Cheers, Raghu

CSS
7,804 Views

@vachagan_gratian I will open a call in the next 10 minutes to discuss the Grafana issue. Thanks, and I look forward to meeting you.

Meeting URL: http://tinyurl.com/css-storage-team

Email ID: raghuraman.gopalakrishnan@csscorp.com

 

Thanks,

Raghu

vachagan_gratian
7,415 Views

Cool!

CSS
7,359 Views

Hello,

 

Good day!

 

Were you able to find a solution for the second graph (aggregate historical capacity)? We tried changing some datasources in the panel JSON, but nothing has worked out so far.
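In case it helps, one thing that could be checked is whether the aggregate capacity metrics are reaching Graphite at all; a rough sketch, assuming a default Graphite install (the whisper path is an assumption):

    # Look for aggregate capacity series on the Graphite server; adjust the
    # whisper path if your Graphite stores data elsewhere.
    find /opt/graphite/storage/whisper -ipath "*aggr*" | head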

 

 

Thanks,

Raghu
