Tuesday, May 17, 2011

My top ten rules of Virtualization.

10. Never allow the OS to manage the memory dynamically. Always make it static: reserve the VM's full memory allocation. (A sketch of this follows the list.)

9. Route as close to the virtual servers as possible.

8. You can strive as much as you want to improve your VMware infrastructure with more memory, flash disks, and 40GbE, but if you do not take a holistic approach and simplify the network from server to server and from server to desktop, you are only going to realize a portion of those gains.

7. A hop is a hop is a hop. Don't let the network ugru tell you that with this seamless fabric there are no hops. Every time traffic leaves a switch, it is a hop.

6. You can never have enough memory on an ESX host; always max out the host regardless of cost. Once you start using DRS and DPM, you will be amazed when, over a weekend, you have one server running in your datacenter and you save about $4K every weekend and about $0.5K every night in data center power and cooling.

5. Don't add cores. Every time you add a core to a VM you raise the CPU wait state (ready time) by at least 10%. Optimize applications, distribute load, create another instance; do not add another vCPU. (A quick way to turn the ready counter into a percentage follows the list.)

4. Do not back up at the OS layer. Back up via VCB or at the storage layer with snapshots and NDMP. Never back up at the OS; the same applies in the physical world. Back up the data: you can re-provision in minutes what it takes hours to restore. (A snapshot sketch follows the list.)

3. Never use RDMs. The fictional performance gains are not worth the lack of functionality.

2. Always install VMware Tools in the OS, and use the vmxnet paravirtualized network adapter. (A quick check for both follows the list.)

1. This one is absolute, no bending: never install a Windows / Linux / Solaris cluster of any kind in VMware.
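
To make rule 10 concrete, here is a minimal sketch assuming the pyvmomi Python SDK and a reachable vCenter; the address, credentials, and the VM name "app01" are placeholders, not anything from this post. It locks the VM's memory reservation to its full configured size so the hypervisor never balloons or swaps it.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details; use proper certificates outside a lab.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

# Walk the inventory for the VM ("app01" is a made-up name for this sketch).
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "app01")

# Reserve all of the VM's configured memory: no ballooning, no host-level swapping.
spec = vim.vm.ConfigSpec(memoryReservationLockedToMax=True)
vm.ReconfigVM_Task(spec=spec)

Disconnect(si)
```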
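
For rule 5, the counter worth watching is CPU ready. vCenter and esxtop report it as a summation in milliseconds per sample interval; this small helper (plain Python, no SDK, example numbers made up) turns that into the percentage most people quote.

```python
def cpu_ready_percent(ready_ms: float, interval_s: float = 20.0, vcpus: int = 1) -> float:
    """Percent of the sample interval a VM spent waiting for a physical CPU.

    ready_ms   -- the 'ready' summation counter for the interval, in milliseconds
    interval_s -- sample length (20 s for real-time stats)
    vcpus      -- vCPU count, to normalize a counter aggregated across vCPUs
    """
    return ready_ms / (interval_s * 1000.0 * vcpus) * 100.0

# Example: 1,600 ms of ready time in a 20 s sample on a 4-vCPU VM -> 2.0 %
print(cpu_ready_percent(1600, vcpus=4))
```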
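
For rule 4, a storage-side backup usually starts with a snapshot instead of an in-guest agent. A minimal sketch, reusing the connection and the vm object from the rule 10 sketch (the snapshot name is just an example):

```python
# Quiesced, memory-less snapshot as the consistent point for an array- or VCB-style backup.
task = vm.CreateSnapshot_Task(
    name="pre-backup",
    description="consistent point for storage-side backup",
    memory=False,   # do not capture guest memory in the snapshot
    quiesce=True,   # ask VMware Tools to quiesce the guest filesystem first
)
```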
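
And for rule 2, a quick audit, again reusing the vm object from the earlier sketch: it prints each virtual NIC type (emulated E1000 versus a paravirtualized vmxnet adapter) and whether VMware Tools is actually running in the guest.

```python
# List every virtual NIC; VirtualE1000 is emulated, VirtualVmxnet3 is paravirtualized.
for dev in vm.config.hardware.device:
    if isinstance(dev, vim.vm.device.VirtualEthernetCard):
        print(type(dev).__name__)

# "guestToolsRunning" means VMware Tools is installed and running inside the guest.
print(vm.guest.toolsRunningStatus)
```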

Yes, I realize guru is misspelled as ugru, but come on, have you actually met a network guru?

2 Comments:

Anonymous said...

ugru: funny, but true. Why is it that every place you go, the network guys are no better than the last? Networking is such an easy skill; why do the good ones think they are worth so much money?

6:27 AM  
M Hamock said...

I disagree with #3. We use RDMs quite a bit. What lack of functionality are you talking about? We use RDMs for any filesystem that is shared between multiple nodes in a fault-tolerant cluster.

4:22 PM  
