2010-06-09

FreeBSD zfs performance

The numbers from yesterday's test on the five-year-old machine were really disappointing: even after tuning the zfs parameters, writes topped out at about 35MB/s. Today I'm trying a different machine.

Test environment:
CPU: Intel Core 2 Duo E8400
RAM: 2GB DDR2 x 3
MB: Gigabyte GA-EP45-UD3P
HDD: Hitachi 1TB (7K1000.B) x 4
OS: FreeBSD 8.0R amd64

/boot/loader.conf
ahci_load="YES"
vm.kmem_size_max="1024M"
vm.kmem_size="1024M"
vfs.zfs.arc_max="100M" 
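After rebooting, whether the tunables actually took effect can be double-checked with sysctl, something like:
# sysctl vm.kmem_size vm.kmem_size_max vfs.zfs.arc_max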
Attach the pool built yesterday on the other machine directly:
# zpool import -f tank
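zpool import with no arguments lists the pools available for import; the -f is needed here because the pool was last in use on the other machine:
# zpool import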
Test:
# dd if=/dev/zero of=/home/dslab/data/test bs=1m count=4000
4000+0 records in
4000+0 records out
4194304000 bytes transferred in 30.429420 secs (137837133 bytes/sec)

# dd if=/home/dslab/data/test of=/dev/null bs=1m
4000+0 records in
4000+0 records out
4194304000 bytes transferred in 22.393339 secs (187301412 bytes/sec)
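That works out to roughly 138 MB/s (about 131 MiB/s) sequential write, around four times yesterday's 35 MB/s, and 187 MB/s reading it back.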

# bonnie -d /home/dslab/data -s 4096 -m zfs
              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
zfs      4096 79500 44.1 106960 20.8 60252 12.9 155730 70.2 170719 13.6 121.5  0.5
zfs      4096 78294 42.9 108320 20.8 58735 13.0 145792 64.4 167522 13.2 118.4  0.5
Not much more needs to be said... time to get a new machine!!

FreeBSD zfs raidz

Environment: P4 3.0GHz, 2GB RAM, FreeBSD 8.0R i386, Hitachi 1TB x4

Enable zfs at boot
#echo 'zfs_enable="YES"' >> /etc/rc.conf 
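zfs can also be started right away without rebooting; on 8.0 the rc script should do it:
#/etc/rc.d/zfs start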
Set the tuning parameters (following the ZFSTuningGuide)
#ee /boot/loader.conf
vm.kmem_size="330M"
vm.kmem_size_max="330M"
vfs.zfs.arc_max="40M"
vfs.zfs.vdev.cache.size="5M"
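While the tests run, the actual ARC size can be watched to make sure the cap holds, e.g.:
#sysctl kstat.zfs.misc.arcstats.size kstat.zfs.misc.arcstats.c_max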
Initialize the disks (ad4, ad6, ad8, ad10)
#dd if=/dev/zero of=/dev/ad4 bs=1m count=1 
#fdisk -I /dev/ad4
#glabel label raidz11 /dev/ad4s1
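The same three commands are repeated for ad6, ad8 and ad10 (labels raidz12 to raidz14); a small /bin/sh loop does the job, roughly:
#c=2
#for d in ad6 ad8 ad10; do
>  dd if=/dev/zero of=/dev/$d bs=1m count=1
>  fdisk -I /dev/$d
>  glabel label raidz1$c /dev/${d}s1
>  c=$((c+1))
>done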
Create the raidz pool and check it
#zpool create tank raidz /dev/label/raidz11 /dev/label/raidz12 /dev/label/raidz13 /dev/label/raidz14 
#zpool status tank
pool: tank
state: ONLINE
scrub: none requested
config:
        NAME               STATE     READ WRITE CKSUM
        tank               ONLINE       0     0     0
          raidz1           ONLINE       0     0     0
            label/raidz11  ONLINE       0     0     0
            label/raidz12  ONLINE       0     0     0
            label/raidz13  ONLINE       0     0     0
            label/raidz14  ONLINE       0     0     0

errors: No known data errors

#df -H
Filesystem     Size    Used   Avail Capacity  Mounted on
tank           2.9T    0B      2.9T  0%           /tank

# zpool list
NAME   SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
tank   3.64T   630K  3.64T     0%    ONLINE  -
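The two numbers measure different things: zpool list reports the raw pool size in binary units (4 x 1TB ≈ 4 x 0.91 TiB ≈ 3.64 TiB, parity included), while df shows the space usable after one disk's worth of raidz1 parity, about 3TB, which ends up as 2.9T in df -H's decimal units once ZFS overhead is subtracted.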
Create the mount point
#zfs set mountpoint=/home/dslab/data tank/data
#df -H
Filesystem     Size    Used   Avail Capacity  Mounted on
tank/data      2.9T      0B    2.9T     0%         /home/dslab/data
tank           2.9T      0B    2.9T     0%         /tank
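(The tank/data dataset has to exist before a mountpoint can be set on it, so a create step was presumably run first even though it isn't shown above; something like:)
#zfs create tank/data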
Test read/write performance
#dd if=/dev/zero of=/home/dslab/data/test bs=1m count=4000
4000+0 records in
4000+0 records out
4194304000 bytes transferred in 108.070644 secs (38810762 bytes/sec)

#dd if=/home/dslab/data/test of=/dev/null bs=1m
4000+0 records in
4000+0 records out
4194304000 bytes transferred in 33.516918 secs (125139907 bytes/sec)

#bonnie -d /home/dslab/data -s 4096 -m zfs
              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
zfs      4096 25813 36.6 33618 17.0 22683 11.5 44180 53.0 81775 15.9 107.6  1.0

The results are not great; the CPU is probably not up to it.

WD15EARS on FreeBSD

The instructions included in the drive's box say that on anything other than Windows you can just plug it in and use it, but I went and looked things up anyway; the simplest way is probably to use gpart.
#gpart create -s GPT ad10
ad10 created
#gpart show ad10
=>   34  2930277101  ad10  GPT  (1.4T)
     34  2930277101        - free -  (1.4T)
4K = 512 x 8, so start the partition at LBA 40 instead (the default first free LBA, 34, is not a multiple of 8):
#gpart add -b 40 -s 2930277095 -t freebsd-ufs ad10
ad10p1 added
# gpart show ad10
=>   34  2930277101  ad10  GPT  (1.4T)
     34           6        - free -  (3.0K)
     40  2930277095     1  freebsd-ufs  (1.4T)
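Quick check on the alignment: 40 x 512 = 20480 = 5 x 4096, so the partition now starts on a 4K physical-sector boundary, whereas the default start of 34 (34 x 512 = 17408 bytes) would not.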

#newfs -S 4096 -b 32768 -f 4096 -O 2 -U -m 1 -o space /dev/ad10p1
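A rough rundown of the flags: -S 4096 matches the drive's 4K physical sectors, -b 32768 and -f 4096 set the block and fragment sizes, -O 2 selects UFS2, -U turns on soft updates, -m 1 keeps a 1% reserve, and -o space optimizes for space rather than time.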
Now test the speed
#dd if=/dev/zero of=/dev/ad10p1 bs=1m count=8k
8192+0 records in
8192+0 records out
8589934592 bytes transferred in 90.180949 secs (95252209 bytes/sec)
diskinfo still reports a sectorsize of 512 (the WD15EARS presents 512-byte logical sectors to the host; the physical sectors are 4K):
#diskinfo -t /dev/ad10
/dev/ad10
        512             # sectorsize
        1500301910016   # mediasize in bytes (1.4T)
        2930277168      # mediasize in sectors
        2907021         # Cylinders according to firmware.
        16              # Heads according to firmware.
        63              # Sectors according to firmware.
        WD-WMAVU3127845 # Disk ident.

Seek times:
        Full stroke:      250 iter in   5.759155 sec =   23.037 msec
        Half stroke:      250 iter in   4.177206 sec =   16.709 msec
        Quarter stroke:   500 iter in   7.581140 sec =   15.162 msec
        Short forward:    400 iter in   2.664591 sec =    6.661 msec
        Short backward:   400 iter in   2.181377 sec =    5.453 msec
        Seq outer:       2048 iter in   0.273239 sec =    0.133 msec
        Seq inner:       2048 iter in   0.238800 sec =    0.117 msec
Transfer rates:
        outside:       102400 kbytes in   1.065353 sec =    96118 kbytes/sec
        middle:        102400 kbytes in   1.264815 sec =    80960 kbytes/sec
        inside:        102400 kbytes in   2.177371 sec =    47029 kbytes/sec