Disk Read/Write Testing and Speed Tuning


Disk I/O is always the Achilles' heel of system performance, and for disks used as storage in particular, read/write testing matters a great deal.

IOPS and throughput are the two main metrics for measuring storage performance.

IOPS is the number of I/O operations the storage completes per second; throughput is the total amount of data transferred per second. Install both test tools with apt:

apt install fio ioping

Random write test

Create a 1 GB file named fio_rand_write and write to it randomly in 4 KB blocks, using 16 parallel jobs with an I/O depth of 16 each, for at most 180 seconds.

fio --name fio_rand_write --direct=1 --rw=randwrite --bs=4k --size=1G --numjobs=16 --runtime=180 --group_reporting --refill_buffers --ioengine=libaio --iodepth=16

write: IOPS=674, BW=2700KiB/s (2765kB/s)(475MiB/180193msec); 0 zone resets
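
As a rough sanity check, bandwidth should be close to IOPS multiplied by the block size. Using the numbers reported above (674 IOPS at 4 KiB per I/O):

echo "$((674 * 4)) KiB/s"    # 2696 KiB/s, close to the 2700 KiB/s fio reports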

Random read test

fio --name fio_rand_read  --direct=1 --rw=randread  --bs=4k --size=1G --numjobs=16 --runtime=180 --group_reporting --refill_buffers --ioengine=libaio --iodepth=16

read: IOPS=5797, BW=22.6MiB/s (23.7MB/s)(4079MiB/180106msec)

Mixed random read/write test

Create a 4 GB file named test and perform random I/O on it in 4 KB blocks with a 75%/25% read/write mix (three reads for every write) and an I/O depth of 64; fio prints the current progress and disk IOPS while it runs.

fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75
test: (groupid=0, jobs=1): err= 0: pid=25281: Tue Nov 12 14:55:46 2019
  read: IOPS=1264, BW=5059KiB/s (5181kB/s)(3070MiB/621348msec)
   bw (  KiB/s): min=  168, max= 8816, per=100.00%, avg=5059.90, stdev=1198.93, samples=1241
   iops        : min=   42, max= 2204, avg=1264.93, stdev=299.74, samples=1241
  write: IOPS=422, BW=1691KiB/s (1731kB/s)(1026MiB/621348msec); 0 zone resets
   bw (  KiB/s): min=    8, max= 3024, per=100.00%, avg=1693.60, stdev=404.84, samples=1239
   iops        : min=    2, max=  756, avg=423.38, stdev=101.22, samples=1239
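
The tests above all use small random I/O; for peak sequential throughput a larger block size is usually chosen. A minimal sketch in the same style (the seq_read job name and 1M block size are illustrative, not taken from the tests above):

fio --name=seq_read --direct=1 --rw=read --bs=1M --size=1G --numjobs=1 --runtime=180 --group_reporting --ioengine=libaio --iodepth=16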

ioping is a disk I/O latency measurement tool; much like ping for network latency, it is used to test how quickly a disk responds.

ioping -c 100 /dev/md0

4 KiB <<< /dev/md0 (block device 43.7 TiB): request=18 time=15.0 ms
4 KiB <<< /dev/md0 (block device 43.7 TiB): request=19 time=22.0 ms (slow)
4 KiB <<< /dev/md0 (block device 43.7 TiB): request=20 time=15.3 ms
4 KiB <<< /dev/md0 (block device 43.7 TiB): request=21 time=16.4 ms
4 KiB <<< /dev/md0 (block device 43.7 TiB): request=22 time=7.11 ms (fast)
4 KiB <<< /dev/md0 (block device 43.7 TiB): request=23 time=10.6 ms
4 KiB <<< /dev/md0 (block device 43.7 TiB): request=24 time=12.7 ms
4 KiB <<< /dev/md0 (block device 43.7 TiB): request=25 time=13.7 ms
4 KiB <<< /dev/md0 (block device 43.7 TiB): request=26 time=11.7 ms
4 KiB <<< /dev/md0 (block device 43.7 TiB): request=27 time=8.45 ms
4 KiB <<< /dev/md0 (block device 43.7 TiB): request=28 time=9.12 ms
4 KiB <<< /dev/md0 (block device 43.7 TiB): request=29 time=10.3 ms
4 KiB <<< /dev/md0 (block device 43.7 TiB): request=30 time=9.05 ms
4 KiB <<< /dev/md0 (block device 43.7 TiB): request=31 time=11.8 ms
4 KiB <<< /dev/md0 (block device 43.7 TiB): request=32 time=11.8 ms
4 KiB <<< /dev/md0 (block device 43.7 TiB): request=33 time=11.3 ms
4 KiB <<< /dev/md0 (block device 43.7 TiB): request=34 time=12.0 ms
^C
--- /dev/md0 (block device 43.7 TiB) ioping statistics ---
33 requests completed in 425.6 ms, 132 KiB read, 77 iops, 310.2 KiB/s
generated 34 requests in 33.4 s, 136 KiB, 1 iops, 4.08 KiB/s
min/avg/max/mdev = 7.02 ms / 12.9 ms / 22.0 ms / 3.51 ms
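
Beyond per-request latency, ioping can also run a seek-rate test with -R; a minimal sketch reusing /dev/md0 from the example above:

ioping -R /dev/md0    # seek-rate test; reports requests completed, iops and average latency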

fio parameter reference

--direct=1                  # non-buffered I/O
--rw=randrw                 # random mixed read/write; other modes:
                            #   read
                            #   write
                            #   rw
                            #   randread
                            #   randwrite
                            #   randrw
--bs=4k                     # block size of each I/O
--size=4g                   # amount of data to transfer per job
--numjobs=16                # number of parallel jobs (clones of this workload)
--runtime=180               # time limit in seconds; the job stops even if --size has not been reached
--ioengine=libaio           # I/O engine
                            #   libaio - async IO
                            #   ...
--rwmixwrite=50             # in a mixed workload, writes make up 50% of the I/O
--rwmixread=100             # in a mixed workload, reads make up 100% of the I/O
--refill_buffers            # refill the I/O buffer with new data on every submit, so results are not skewed by repeated buffer contents
--group_reporting           # report results for the whole group instead of per job
--iodepth=16                # number of I/Os kept in flight at the same time (per job)
--name=fio_rand_write       # job name; a temporary file with this name is created in the current directory
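
Putting the mix parameters together, a write-heavier variant of the mixed test could look like the sketch below; the 50/50 split, job name and sizes are illustrative only:

fio --name=fio_mix_5050 --direct=1 --rw=randrw --rwmixread=50 --bs=4k --size=1G --numjobs=4 --runtime=180 --group_reporting --refill_buffers --ioengine=libaio --iodepth=16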

RAID speed tuning

The dev.raid.speed_limit_min and dev.raid.speed_limit_max sysctls control the minimum and maximum md RAID resync/rebuild speed (in KB/s), and blockdev --setra adjusts the array's read-ahead (in 512-byte sectors). Check the current values, then raise them:

sysctl dev.raid.speed_limit_min
dev.raid.speed_limit_min = 10000
sysctl dev.raid.speed_limit_max
dev.raid.speed_limit_max = 200000

sysctl -w dev.raid.speed_limit_min=100000
sysctl -w dev.raid.speed_limit_max=500000

blockdev --getra /dev/md0
16384
blockdev --setra 65536 /dev/md0
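
Values written with sysctl -w and blockdev --setra are lost on reboot. A minimal sketch for making the sysctl values persistent, assuming the standard /etc/sysctl.d directory (the file name 90-raid-speed.conf is an arbitrary choice):

cat <<'EOF' > /etc/sysctl.d/90-raid-speed.conf
dev.raid.speed_limit_min = 100000
dev.raid.speed_limit_max = 500000
EOF
sysctl --system    # reload settings from all sysctl configuration files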
