Does anyone by any chance have a nice "cookbook" solution for handling existing Linux VMs using LVM, or even new Linux VMs?
I've gone through TR-3749 as well as the VMware alignment document, and while I'm comfortable enough with the fdisk steps, they are fairly cumbersome from a customer perspective (whether for fixing Linux alignment when you can't use mbralign, or for Linux alignment in general).
I'm thinking of working something up but am just pressed for time right now so thought I'd put this out there....
Thanks in advance for any replies.
In some of the above comments you suggest using a single, non-partitioned disk for LVM, with logical volumes on top of that. I've been unable to get kickstart to create such a setup; I can't define a PV spanning an entire disk using the part command:
part pv.32 --onpart sdb
and if I pre-create the physical volume in %pre, the volgroup command fails. The same happens if I pre-create the volume group in %pre: logvol fails with a "no volume group exists with the name..." error.
The notes you link to don't seem to address this, any thoughts?
Sorry, I don't think kickstart can handle that type of config (I'm happy to be wrong).
Check out section 4.1 of this TR - http://media.netapp.com/documents/tr-3747.pdf - where Jon Benedict describes how to create properly aligned partitions in kickstart.
In my personal opinion, LVM doesn't provide much additional benefit in virtual environments (or on NetApp LUNs). I suggest new machines be created with one vmdk file per filesystem. The only vmdk file in a Linux VM that needs a Master Boot Record (MBR) is the boot drive (often mounted as /boot). The remaining drives do NOT need partitions at all (so no need to use fdisk). The filesystem can be created directly on the device, which results in an aligned filesystem. The filesystem can also be grown as needed, by growing the vmdk file (no need to mess with partition changes) and then growing the guest filesystem. This basically replaces the need for a volume manager.
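To make that concrete, here's a minimal sketch of the no-partition approach, simulated with a file-backed image instead of a real vmdk-backed device (the filename and sizes are just for illustration; in a real guest you'd point these commands at something like /dev/sdb after growing the vmdk and rescanning the SCSI bus):

```shell
# Simulate a raw, unpartitioned vmdk with a file-backed image
truncate -s 256M data.img

# Create the filesystem directly on the "device" -- no partition table,
# so the filesystem starts at offset 0 and is aligned by construction
mkfs.ext4 -F -q data.img

# Later, "grow the vmdk"...
truncate -s 512M data.img

# ...then grow the guest filesystem to fill it
# (resize2fs wants a clean filesystem, so fsck first)
e2fsck -f -p data.img
resize2fs data.img
```

I've used ext4 here, but the same idea works with any filesystem that supports resizing; with a recent enough kernel the resize2fs step can even be done online against a mounted filesystem.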
Again, this is my personal opinion, but here is what I tend to do:
boot.vmdk (usually 256MB, a single partition starting at sector 64)
swap.vmdk (size depends on the memory in vm, no partition table)
root.vmdk (size depends on distribution, no partition table)
var.vmdk (size depends on applications hosted, no partition table)
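For the one vmdk that does need an MBR, the aligned partition can be scripted rather than created interactively in fdisk. Here's a sketch using sfdisk against a file-backed image (substitute the real boot device, e.g. /dev/sda, in a guest; the image name is just for illustration):

```shell
# Simulate the 256MB boot vmdk
truncate -s 256M boot.img

# One Linux (type 83) partition starting at sector 64:
# 64 * 512B = 32KB, a multiple of the 4KB WAFL block size, so it's aligned
echo 'start=64, type=83' | sfdisk -q boot.img

# Verify the start sector
sfdisk -d boot.img
```

The dump at the end should show the partition with start=64, which is the whole point of the exercise.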
Hope this helps.
So, I am extremely late coming back here... I do appreciate the input, however.
The school I came from was basically...
/boot = 100 MB or so
LVM = the rest of the space, since resizing LVs inside LVM is just very easy
I must admit that dropping LVM for more vmdks does make a good bit of sense... just push everything into an individual vmdk that you would have had as an individual LVM LV.
And... I hate to ask the obvious... but have you ever seen any issues with dropping LVM? (I'm just so comfortable with it personally that it still seems really odd to drop it.)
Let me start by saying I really like LVM when dealing with physical disk devices. I was a Unix/Linux SysAdmin in a former life and took advantage of logical volume managers whenever working with physical disk or legacy disk arrays.
It's probably bad sport to answer my own challenge - "I'd be interested in hearing about any other features that may be lost with using this model." - but here goes anyway...
I can't get to the document anymore either, but there is some good news...
After some searching I found this link:
The content is also in the latest version of NetApp TR-3747, "Best Practices for File System Alignment in Virtual Environments".
I hope this helps!
forgette wrote: I'd be interested in hearing about any other features that may be lost with using this model.
Host-based mirroring between multiple storage arrays. Usually this is done to build a disaster recovery solution, but it can also increase the resiliency of a local config (e.g. there are known low-end RAID arrays that don't support online firmware upgrades). Since ESX does not offer any form of hypervisor-based disk mirroring, it has to be done inside the VM => volume manager.
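In practice that means giving the VM one disk from each array and mirroring at the LVM layer inside the guest. A rough sketch of what that looks like (device names and sizes are hypothetical, and this needs root plus real block devices, so treat it as illustrative only):

```shell
# /dev/sdb is backed by a datastore on array A, /dev/sdc by array B
pvcreate /dev/sdb /dev/sdc
vgcreate mirrorvg /dev/sdb /dev/sdc

# One mirror leg per PV, so the LV survives the loss of either array
lvcreate -m1 --mirrorlog core -L 10G -n datalv mirrorvg /dev/sdb /dev/sdc

mkfs.ext4 /dev/mirrorvg/datalv
```

This is exactly the kind of feature you give up with the one-filesystem-per-vmdk model, since there's no volume manager left in the guest to do the mirroring.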