Tuesday, April 12, 2011

Basic I/O Monitoring on Linux

This is my fourth week at Pythian and in Canada, and I'm starting to settle back into my normal routine: my personal things are getting sorted and my working environment is set. Here at Pythian I'm in a team of four, together with Christo, Joe, and Virgil. (I should write another post about starting at Pythian; I will, one day.)
Yesterday, I asked Christo to show me how he monitors I/O on Linux. I needed to collect statistics on a large Oracle table on a production box, and I wanted to keep an eye on the impact. So we grabbed Joe as well and all three of us sat around my PC. While we were at it, Paul was around and showed some interest in the topic too (otherwise, why would all three of us have been involved?). Anyway, Dave and Paul thought this would make a nice case for a blog post. So here we are…
Indeed, while the technique we discuss here is basic, it gives a good overview and is very easy to use. So let's get focused… We will use the iostat utility. In case you need to find out more about it, you know where to look: the man pages.
So we will use the following form of the command:
iostat -x [-d] <interval>
  • -x option displays extended statistics. You definitely want it.
  • -d is optional. It suppresses the CPU utilization report to avoid cluttering the output. If you leave it out, you will get the following couple of lines in addition:
    avg-cpu:  %user   %nice    %sys %iowait   %idle
               6.79    0.00    3.79   16.97   72.46
  • <interval> is the number of seconds iostat waits between reports. Without a specified interval, iostat displays statistics averaged over the time since the system booted and then exits, which is not useful in our case. Specifying the number of seconds causes iostat to print periodic reports in which the IO statistics are averaged over the period since the previous report. I.e., specifying 5 makes iostat dump the average IO characteristics of the last 5 seconds, every 5 seconds, until it's stopped.
If you have many devices and you want to watch only some of them, you can also specify the device names on the command line:
iostat -x -d sda 5
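
If you are curious where these numbers come from, iostat derives its per-second rates from cumulative kernel counters; on 2.6+ kernels these are exposed in /proc/diskstats. Here is a minimal Python sketch (not part of iostat itself; the device name "sda" and the 5-second interval are just examples, and it assumes the classic 11-counter /proc/diskstats layout) that samples the counters twice and derives a few of the same rates:

import time

def read_counters(device):
    # Each /proc/diskstats line is: major, minor, device name, then
    # (classically) 11 cumulative counters.
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == device:
                return [int(x) for x in fields[3:14]]
    raise ValueError("device not found: %s" % device)

def sample(device="sda", interval=5):
    before = read_counters(device)
    time.sleep(interval)
    after = read_counters(device)
    d = [a - b for a, b in zip(after, before)]
    # Counter indexes: 0 reads, 1 reads merged, 2 sectors read,
    # 3 ms reading, 4 writes, 5 writes merged, 6 sectors written,
    # 7 ms writing, 8 I/Os in progress, 9 ms doing I/O,
    # 10 weighted ms doing I/O.
    print("r/s    = %.2f" % (d[0] / float(interval)))
    print("w/s    = %.2f" % (d[4] / float(interval)))
    print("rsec/s = %.2f" % (d[2] / float(interval)))
    print("wsec/s = %.2f" % (d[6] / float(interval)))
    # %util is the fraction of the interval the device was busy.
    print("%%util  = %.2f" % (100.0 * d[9] / (interval * 1000.0)))

sample()
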
Now let's get to the most interesting part: what those cryptic extended statistics actually mean. (For readability, I have formatted the report below so that the last two lines are in fact a continuation of the first two.)
Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s
sda          0.00  12.57 10.18  9.78  134.13  178.84    67.07
 
wkB/s avgrq-sz  avgqu-sz   await  svctm  %util
89.42    15.68      0.28   14.16   8.88  17.72
  • r/s and w/s — respectively, the number of read and write requests issued per second by processes to the OS for a device.
  • rsec/s and wsec/s — sectors read/written per second (each sector is 512 bytes).
  • rkB/s and wkB/s — kilobytes read/written per second.
  • avgrq-sz — average sectors per request (for both reads and writes). Do the math: (rsec/s + wsec/s) / (r/s + w/s) = (134.13 + 178.84) / (10.18 + 9.78) = 15.6798597
    If you want it in kilobytes, divide by 2.
    If you want it separately for reads and writes, do your own math using rkB/s and wkB/s. (The sketch after this list works through these formulas.)
  • avgqu-sz — average queue length for this device.
  • await — average response time (ms) of IO requests to a device. The name is a bit confusing, as this is the total response time, including the time a request waits in the queue (let's call it qutim) plus the service time the device spends actually working on the request (see the next column, svctm). So the formula is await = qutim + svctm.
  • svctm — average time (ms) a device was servicing requests. This is a component of the total response time of IO requests.
  • %util — this is a pretty confusing value. The man page defines it as "Percentage of CPU time during which I/O requests were issued to the device (bandwidth utilization for the device). Device saturation occurs when this value is close to 100%." A bit difficult to digest. Perhaps it's better to think of it as the percentage of time the device was servicing requests, as opposed to being idle. To understand it better, here is the formula:
    utilization = ( (read requests + write requests) * service time in ms / 1000 ms ) * 100%
    or
    %util = ( r/s + w/s ) * svctm / 10 = ( 10.18 + 9.78 ) * 8.88 / 10 = 17.72448
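
To make the arithmetic above concrete, here is a small worked example in Python (nothing iostat runs; it just plugs the sample row into the formulas from this list). The variable names mirror the column names, with await renamed to await_ms since await is a reserved word in recent Python:

# Values taken from the sample iostat row above.
r, w = 10.18, 9.78              # r/s, w/s
rsec, wsec = 134.13, 178.84     # rsec/s, wsec/s
await_ms, svctm = 14.16, 8.88   # await and svctm, both in ms

avgrq_sz = (rsec + wsec) / (r + w)   # average request size, in sectors
util = (r + w) * svctm / 10          # %util, as derived above
qutim = await_ms - svctm             # average time queued, in ms

print("avgrq-sz = %.2f sectors (%.2f kB)" % (avgrq_sz, avgrq_sz / 2))
print("%%util   = %.2f%%" % util)
print("qutim    = %.2f ms" % qutim)

Running it prints roughly 15.68 sectors (7.84 kB), 17.72%, and 5.28 ms, matching the figures in the report.
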
Traditionally, it's common to assume that the closer a device gets to 100% utilization, the more saturated it is. This might be true when the system device corresponds to a single physical disk. However, with devices representing a LUN of a modern storage box, the story might be completely different.
Rather than looking at device utilization, there is another way to estimate how loaded a device is. Look at the non-existent column I mentioned above, qutim, the average time a request spends in the queue. If it's insignificant compared to svctm, the IO device is not saturated. When it becomes comparable to svctm, or grows above it, requests are queued longer and a major part of the response time is actually time spent waiting in the queue.
The figure in the await column should be as close to the one in the svctm column as possible. If await grows much above svctm, watch out! The IO device is probably overloaded.
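
Putting that rule of thumb into code, a simple sketch might look like the following (the function name, thresholds, and the second device's values are illustrative, not from any real box):

def check_saturation(device, await_ms, svctm_ms):
    # Queue wait is what's left of the response time after service time.
    qutim = await_ms - svctm_ms
    if qutim > svctm_ms:
        print("%s: qutim %.2f ms > svctm %.2f ms -- requests pile up "
              "in the queue; the device looks overloaded" %
              (device, qutim, svctm_ms))
    else:
        print("%s: qutim %.2f ms -- looks healthy" % (device, qutim))

check_saturation("sda", 14.16, 8.88)  # the sample row above: healthy
check_saturation("sdb", 42.00, 6.00)  # hypothetical overloaded LUN
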
There is much more to say about IO monitoring and interpreting the results. Perhaps this is only the first of a series of posts about IO statistics. At Pythian, we often come across different environments with specific characteristics and various client requirements. So stay tuned: more to come.

Courtesy: http://www.pythian.com/news/247/basic-io-monitoring-on-linux/