It's been a while, once again. I have so many things to write about and very little time. Let's start with something small.
XenServer's storage system has seen a number of changes over the years, some for the good and some not so good. In the latest versions (above 6.0.2) I've been seeing performance issues in this area. I first noticed this when running simultaneous system updates on more than two Linux machines: every VM on the host became very slow to respond.
Some additional tests at work confirmed it: beyond two concurrent fio instances there is a major performance decrease, from over 100MB/s per VM in a two-instance test down to around 1-2MB/s per VM! This will not do.
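For reference, the tests were along these lines, one fio instance per VM (a sketch of the workload rather than my exact job; the file path, block size, and runtime here are my own picks):

fio --name=vmtest --filename=/root/fio.dat --size=2G --bs=1M --rw=rw --direct=1 --runtime=60 --time_based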
A simple search led me to this document: Citrix XenServer 6.1.0 Storage Performance Guide
As far as I know this document had never been published before, at least not to the public; I've certainly never found one for other versions.
It contains a detailed explanation of how the ring system works behind the scenes, as well as explanations of the tunable variables. Based on some of the included diagnostic utilities, my test scenario was maxing out the blkback rings and the blktap pool. The following changes were made on XenServer 6.2 systems running LSI and Areca hardware RAID, with 6 drives in RAID 10 and 4 drives in RAID 5 respectively:
echo 3 > /sys/module/blkbk/parameters/max_ring_page_order
vi /opt/xensource/sm/blktap2.py (modify line 1180 to pool_size = 2816)
xe sr-param-set uuid=<sr-uuid> other-config:blkback-mem-pool-size-rings=8
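To sanity-check that the settings took effect after rebooting, something like this should do (the sysfs path is the same one the echo above writes to; <sr-uuid> is your SR's UUID):

cat /sys/module/blkbk/parameters/max_ring_page_order
xe sr-param-list uuid=<sr-uuid> | grep blkback-mem-pool-size-rings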
This changes the blkback ring size to the maximum and increases the blktap pool size. Note that this theoretically costs more memory per VM and is not supported by Citrix. After a reboot, I saw different performance behavior. Testing with 1-2 nodes still produced the same read/write performance, but increasing the number of test nodes acting on the same SR now resulted in evenly divided transfer rates for up to 6-8 VMs. Beyond that, per-VM performance drops once again, though nowhere near as miserably as before. Note that the line number specified above is different from what it is in XenServer 6.1.
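As a rough back-of-the-envelope estimate of that memory cost (my own arithmetic, assuming standard 4 KiB Xen pages and that the pool setting is counted in ring-sized units, as I read the guide): a ring of order 3 spans 2^3 = 8 pages, or 32 KiB, so a pool of 8 such rings works out to about 256 KiB per virtual disk. Small on its own, but it adds up across many VBDs.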
Amazing! Now I don't have to worry about storage performance so much...not that I had to prior to XenServer 6.1.