Hi. This is great feedback. I'm one of the senior performance TSEs here in AMER, and I've also been working to improve our KB site.
I would say talking to the account team is definitely important here too. This is more of an architecture question about how to design and use the storage; on the Support side, we work the problems as they're identified.
A QoS policy is literally "set it and see." I would start with plain QoS and not worry about minimum throughputs or adaptive QoS just yet. Adaptive QoS has some version-dependent behavior (it changed in ONTAP 9.7), and it will also throttle volumes that fall outside the policy. Here's a KB on it: https://kb.netapp.com/Advice_and_Troubleshooting/Data_Storage_Software/ONTAP_OS/What_is_Adaptive_QoS_and_how_does_it_work%3F
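If you want a quick look at what's already in play on the cluster before changing anything, something like this works (treat it as a sketch; the adaptive command only exists on releases that support adaptive QoS):

    qos policy-group show
    qos adaptive-policy-group show

The first lists any regular policy groups and their limits; the second lists adaptive ones.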
The qos statistics commands are live, and AIQUM will show the IOPS and throughput over a 5-minute interval. Actually setting the QoS policy is covered in the "What is QoS" KB: you just create the policy and apply it to the volume (rough sketch below). This KB talks about using AIQUM (you might try 9.7 or 9.8!) to analyze some of this: https://kb.netapp.com/Advice_and_Troubleshooting/Data_Infrastructure_Management/Active_IQ_Unified_Manager/How_to_monitor_volume_latency_from_ActiveIQ_...
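As a rough sketch of the create-and-apply step (the policy, SVM, and volume names here are just placeholders, and the limit values are only examples; check the "What is QoS" KB for the exact syntax on your ONTAP version):

    qos policy-group create -policy-group pg_app1 -vserver svm1 -max-throughput 5000iops,500MB/s
    volume modify -vserver svm1 -volume app1_vol -qos-policy-group pg_app1

If you want to back it out later, setting -qos-policy-group none on the volume detaches the policy.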
A lot of customers use a three-tier approach, and some of them use QoS on noisy neighbors (bully/shark workloads). You can definitely set it and see. You can fire up a test volume with a synthetic workload to see what it's like. Be careful not to set the limits far too low (e.g., 5 IOPS/5MB/s when the application wants 40,000 IOPS/10,000MB/s), otherwise the throttling will back things up and overwhelm the network layer. Set it and then monitor with qos statistics volume latency show and qos statistics volume performance show, scoped with -volume <volume> -vserver <svm name>. Use both commands, as shown below.
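For example (placeholder names again; both commands keep refreshing live until you Ctrl-C out):

    qos statistics volume latency show -vserver svm1 -volume app1_vol
    qos statistics volume performance show -vserver svm1 -volume app1_vol

The latency output breaks the latency out by component (including time spent being throttled by the QoS limit, which is how you'll spot a limit that's set too low), and the performance output gives you IOPS, throughput, and latency per workload.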
Let me know if this helps.