I/O statistics fields
---------------------

Since 2.4.20 (and some versions before, with patches), and 2.5.45,
more extensive disk statistics have been introduced to help measure disk
activity. Tools such as sar and iostat typically interpret these and do
the work for you, but in case you are interested in creating your own
tools, the fields are explained here.

In 2.4, the information is found as additional fields in
/proc/partitions.  In 2.6, the same information is found in two
places: one is in the file /proc/diskstats, and the other is within
the sysfs file system, which must be mounted in order to obtain
the information. Throughout this document we'll assume that sysfs
is mounted on /sys, although of course it may be mounted anywhere.
Both /proc/diskstats and sysfs use the same source for the information
and so should not differ.

Here are examples of these different formats:

2.4:
   3     0   39082680 hda 446216 784926 9550688 4382310 424847 312726 5922052 19310380 0 3376340 23705160
   3     1    9221278 hda1 35486 0 35496 38030 0 0 0 0 0 38030 38030

2.6 sysfs:
   446216 784926 9550688 4382310 424847 312726 5922052 19310380 0 3376340 23705160
   35486 38030 38030 38030

2.6 diskstats:
   3    0   hda 446216 784926 9550688 4382310 424847 312726 5922052 19310380 0 3376340 23705160
   3    1   hda1 35486 38030 38030 38030

On 2.4 you might execute "grep 'hda ' /proc/partitions". On 2.6, you have
a choice of "cat /sys/block/hda/stat" or "grep 'hda ' /proc/diskstats".
The advantage of one over the other is that the sysfs choice works well
if you are watching a known, small set of disks.  /proc/diskstats may
be a better choice if you are watching a large number of disks because
you'll avoid the overhead of 50, 100, or 500 or more opens/closes with
each snapshot of your disk statistics.

In 2.4, the statistics fields are those after the device name. In
the above example, the first field of statistics would be 446216.
By contrast, in 2.6 if you look at /sys/block/hda/stat, you'll
find just the eleven fields, beginning with 446216.  If you look at
/proc/diskstats, the eleven fields will be preceded by the major and
minor device numbers, and device name.  Each of these formats provides
eleven fields of statistics, each meaning exactly the same thing.
All fields except field 9 are cumulative since boot.  Field 9 should
go to zero as I/Os complete; all others only increase (unless they
overflow and wrap).  Yes, these are (32-bit or 64-bit) unsigned long
(native word size) numbers, and on a very busy or long-lived system they
may wrap.  Applications should be prepared to deal with that; unless
your observations span many minutes or hours, a counter should not
wrap twice before you notice it.

Each set of stats only applies to the indicated device; if you want
system-wide stats you'll have to find all the devices and sum them all up.

Field 1 -- # of reads completed
    This is the total number of reads completed successfully.
Field 2 -- # of reads merged, field 6 -- # of writes merged
    Reads and writes which are adjacent to each other may be merged for
    efficiency.  Thus two 4K reads may become one 8K read before it is
    ultimately handed to the disk, and so it will be counted (and queued)
    as only one I/O.  This field lets you know how often this was done.
Field 3 -- # of sectors read
    This is the total number of sectors read successfully.
Field 4 -- # of milliseconds spent reading
    This is the total number of milliseconds spent by all reads (as
    measured from __make_request() to end_that_request_last()).
Field 5 -- # of writes completed
    This is the total number of writes completed successfully.
Field 6 -- # of writes merged
    See the description of field 2.
Field 7 -- # of sectors written
    This is the total number of sectors written successfully.
Field 8 -- # of milliseconds spent writing
    This is the total number of milliseconds spent by all writes (as
    measured from __make_request() to end_that_request_last()).
Field 9 -- # of I/Os currently in progress
    The only field that should go to zero.  Incremented as requests are
    given to the appropriate struct request_queue and decremented as they
    finish.
Field 10 -- # of milliseconds spent doing I/Os
    This field increases so long as field 9 is nonzero.
Field 11 -- weighted # of milliseconds spent doing I/Os
    This field is incremented at each I/O start, I/O completion, I/O
    merge, or read of these stats by the number of I/Os in progress
    (field 9) times the number of milliseconds spent doing I/O since the
    last update of this field.  This can provide an easy measure of both
    I/O completion time and the backlog that may be accumulating.
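
As an illustration of how these fields are typically consumed, here is a
minimal sketch in C.  It takes two snapshots of the eleven fields and
derives an iostat-style utilization and average queue length from fields
10 and 11.  The path /sys/block/hda/stat and the five-second interval
are assumptions for the example only; adjust them for your system.

    /*
     * Minimal sketch: sample the eleven statistics fields twice and
     * derive iostat-style figures from them.  Path and interval are
     * assumptions for the example only.
     */
    #include <stdio.h>
    #include <unistd.h>

    #define STAT_PATH "/sys/block/hda/stat"
    #define INTERVAL  5     /* seconds between snapshots */

    static int read_stat(unsigned long long s[11])
    {
            FILE *f = fopen(STAT_PATH, "r");

            if (!f)
                    return -1;
            int n = fscanf(f, "%llu %llu %llu %llu %llu %llu %llu"
                              " %llu %llu %llu %llu",
                           &s[0], &s[1], &s[2], &s[3], &s[4], &s[5],
                           &s[6], &s[7], &s[8], &s[9], &s[10]);
            fclose(f);
            return n == 11 ? 0 : -1;
    }

    int main(void)
    {
            unsigned long long a[11], b[11];

            if (read_stat(a))
                    return 1;
            sleep(INTERVAL);
            if (read_stat(b))
                    return 1;

            /* Field 10 (a[9]) is the time the device was busy, in ms. */
            printf("utilization: %.1f%%\n",
                   100.0 * (b[9] - a[9]) / (INTERVAL * 1000.0));
            /* Field 11 (a[10]) is the weighted busy time; its rate of
             * change approximates the average queue length. */
            printf("average queue length: %.2f\n",
                   (b[10] - a[10]) / (INTERVAL * 1000.0));
            return 0;
    }

Since the counters are native unsigned longs, a real tool would also
want to handle the wrap case mentioned above, where a later sample is
numerically smaller than an earlier one.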
To avoid introducing performance bottlenecks, no locks are held while
modifying these counters.  This implies that minor inaccuracies may be
introduced when changes collide, so (for instance) adding up all the
read I/Os issued per partition should equal those made to the disks ...
but due to the lack of locking it may only be very close.

In 2.6, there are counters for each CPU, which make the lack of locking
almost a non-issue.  When the statistics are read, the per-CPU counters
are summed (possibly overflowing the unsigned long variable they are
summed to) and the result given to the user.  There is no convenient
user interface for accessing the per-CPU counters themselves.

Disks vs Partitions
-------------------

There were significant changes between 2.4 and 2.6 in the I/O subsystem.
As a result, some statistics disappeared.  The translation from a disk
address relative to a partition to the disk address relative to the
host disk happens much earlier.  All merges and timings now happen at
the disk level rather than at both the disk and partition level as in
2.4.  Consequently, on 2.6 you'll see different statistics output for
partitions than for disks.  There are only *four* fields available for
partitions on 2.6 machines.  This is reflected in the examples above.

Field 1 -- # of reads issued
    This is the total number of reads issued to this partition.
Field 2 -- # of sectors read
    This is the total number of sectors requested to be read from this
    partition.
Field 3 -- # of writes issued
    This is the total number of writes issued to this partition.
Field 4 -- # of sectors written
    This is the total number of sectors requested to be written to
    this partition.

Note that since the address is translated to a disk-relative one, and no
record of the partition-relative address is kept, the subsequent success
or failure of the read cannot be attributed to the partition.  In other
words, for partitions the number of reads is counted slightly before the
time of queuing, while for whole disks it is counted at completion time.
This is a subtle distinction that is probably uninteresting for most
cases.

More significant is the error induced by counting the numbers of
reads/writes before merges for partitions and after for disks.  Since a
typical workload usually contains a lot of successive and adjacent
requests, the number of reads/writes issued can be several times higher
than the number of reads/writes completed.
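
For tool writers, one practical consequence is that on these kernels a
/proc/diskstats parser has to branch on how many fields a line carries:
eleven for a disk, four for a partition.  A minimal sketch, with the
printed summaries chosen purely for illustration:

    /*
     * Minimal sketch: walk /proc/diskstats on a 2.6 kernel (before
     * 2.6.25), where disk lines carry eleven statistics fields and
     * partition lines only four.
     */
    #include <stdio.h>

    int main(void)
    {
            FILE *f = fopen("/proc/diskstats", "r");
            char line[256], name[32];
            unsigned int major, minor;
            unsigned long long s[11];

            if (!f)
                    return 1;
            while (fgets(line, sizeof(line), f)) {
                    int n = sscanf(line,
                                   "%u %u %31s %llu %llu %llu %llu %llu"
                                   " %llu %llu %llu %llu %llu %llu",
                                   &major, &minor, name,
                                   &s[0], &s[1], &s[2], &s[3], &s[4],
                                   &s[5], &s[6], &s[7], &s[8], &s[9],
                                   &s[10]);
                    if (n == 14)            /* disk: 3 + 11 conversions */
                            printf("%s: %llu reads completed, %llu sectors read\n",
                                   name, s[0], s[2]);
                    else if (n == 7)        /* partition: 3 + 4 conversions */
                            printf("%s: %llu reads issued, %llu sectors read\n",
                                   name, s[0], s[1]);
            }
            fclose(f);
            return 0;
    }

The conversion-count test is what distinguishes the two line types; from
2.6.25 on (see below) partition lines carry the full eleven fields again
and are handled by the disk branch.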
In 2.6.25, the full statistics set is again available for partitions,
and disk and partition statistics are consistent again.  Since we still
don't keep a record of the partition-relative address, an operation is
attributed to the partition which contains the first sector of the
request after any merges.  As requests can be merged across partitions,
this could lead to some (probably insignificant) inaccuracy.

Additional notes
----------------

In 2.6, sysfs is not mounted by default.  If your distribution of
Linux hasn't added it already, here's the line you'll want to add to
your /etc/fstab:

none /sys sysfs defaults 0 0


In 2.6, all disk statistics were removed from /proc/stat.  In 2.4, they
appear in both /proc/partitions and /proc/stat, although the ones in
/proc/stat take a very different format from those in /proc/partitions
(see proc(5), if your system has it).

-- ricklind@us.ibm.com