Thursday, July 7, 2011

ESXi performance issues

[2011-05-23]
Lately at work I've been putting some thought into how to make ESXi in our environment perform better. Don't get me wrong, it's doing what it's designed and advertised to do. It's just that I would love to get closer to replacing ALL our servers with VMs, except for the hypervisors and fileservers.
For lots of VMs, CPU speed and disk read/write speed are not an issue. For example, we have a Windows Server 2003 machine hosting a website that reads and writes data to/from the production database server, and it's fine as a VM.
However, we do compiles on a (different) Windows Server 2003 machine, and what takes about 1 hour 5 minutes on the physical machine takes almost 2 hours in an exact copy (made with VMware Converter) running as a VM. The CPU is rarely maxed out, so the only conclusion is that it's the continual small reads and writes that get bogged down going over the NFS share to a Solaris RAIDZ array. RAIDZ is kind of awful for small, frequent reads and writes (performance goes way up on big sequential transfers), and our “fileserver” is really a desktop machine with 8GB of memory, so… you get what you pay for?
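If I wanted to confirm that small synchronous I/O is really the culprit, something like the quick-and-dirty Python sketch below would do it: it times a burst of small fsync'd writes in whatever directory you point it at, so it could be run once against the NFS-mounted datastore and once against a local disk and the two times compared. The file count, file size, and the example paths are just placeholders, not measurements or paths from our actual setup.

#!/usr/bin/env python
# Rough sketch: time a burst of small synchronous writes in a given
# directory, to compare the NFS/RAIDZ-backed datastore with local disk.
import os
import sys
import time

def small_sync_writes(target_dir, count=2000, size=4096):
    """Write `count` files of `size` bytes, fsync'ing each one,
    then delete them; return the elapsed time in seconds."""
    payload = os.urandom(size)
    start = time.time()
    for i in range(count):
        path = os.path.join(target_dir, "io_test_%d.tmp" % i)
        with open(path, "wb") as f:
            f.write(payload)
            f.flush()
            os.fsync(f.fileno())
        os.remove(path)
    return time.time() - start

if __name__ == "__main__":
    # e.g. python io_test.py /mnt/raidz_datastore /tmp   (example paths)
    for target in sys.argv[1:]:
        print("%s: %.1f s for 2000 small sync writes" % (target, small_sync_writes(target)))

If the datastore directory comes out several times slower than local disk on that kind of workload, that would back up the RAIDZ-over-NFS theory.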
Another issue may be too many hands in the same cookie jar. Obviously, when you're using a single RAIDZ array for 5+ VMs and they're all doing disk I/O at roughly the same time, your performance is going to go to crap. If we had the resources, I'd like to see how 5+ separate mirrored zpools would do in comparison (I would expect a significant improvement, but at the moment there's no way to test it). Another consideration might be a single large RAIDZ for the VMs that don't need the performance, plus 2 or 3 mirrored zpools of SSDs. The only problem is that SSDs are definitely out of the budget unless they come down in price a bit first.
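I can't test the mirrored-zpool layout, but the contention half of the theory is easy to put a number on: run the same kind of small-write loop from several processes at once against the shared datastore and see how much the total time blows up going from 1 to 5 concurrent writers. Again, the datastore path here is a made-up placeholder, not our real mount point.

#!/usr/bin/env python
# Rough sketch: simulate several VMs hammering the same datastore at once
# by running the small sync-write loop in parallel processes.
import multiprocessing
import os
import time

def worker(target_dir, count=500, size=4096):
    # Each process writes and fsyncs its own batch of small files.
    payload = os.urandom(size)
    for i in range(count):
        path = os.path.join(target_dir, "vm_sim_%d_%d.tmp" % (os.getpid(), i))
        with open(path, "wb") as f:
            f.write(payload)
            f.flush()
            os.fsync(f.fileno())
        os.remove(path)

if __name__ == "__main__":
    datastore = "/mnt/raidz_datastore"  # placeholder mount point
    for writers in (1, 5):
        start = time.time()
        procs = [multiprocessing.Process(target=worker, args=(datastore,))
                 for _ in range(writers)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        print("%d concurrent writers: %.1f s" % (writers, time.time() - start))

If 5 writers take a lot more than 5x the single-writer time, the array is choking on the concurrency itself, which is exactly the case where splitting the load across separate pools ought to help.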
Not to mention we also use the same RAIDZ array for CIFS shares for code, Excel templates, documents, backups, and all kinds of other things.
So… what steps can be taken to get that 2-hour compile time reasonably closer to the 1 hour and 5 minutes of a physical (1U rack) server?
(more later, as additional optimization attempts are made)
[2011-07-07]
I just re-read this post and realized there's an obvious solution I didn't mention: adding a SATA SSD directly to the hypervisor as a local datastore. That would be a great solution except that the hypervisor in question is a consumer HP desktop PC with no open drive bays, and the roughly $500 for the SSD is not in the budget.
