Recently I picked up some new, though end-of-life, 3.5” Seagate 2TB (Exos 7E8 ST2000NM0045) hard drives and a refurbished HPE H240 controller to go with them. Over the past weekend I got around to installing the controller and drives in my computer and began setting things up for the first time. Below are notes on what I did and discovered along the way, as setting up this hardware is new to me in practice if not in theory.

Installation and Software

I installed three drives and the controller in a Gentoo workstation running kernel 5.14.8-gentoo. The only change I made to the kernel for this was to build in the HP Smart Array SCSI driver, as shown below. I may build it as a module at some future point, but for now the driver is built in.

Device Drivers  ...>  
  SCSI device support  ...>  
    SCSI low-level drivers  ...>  
      <*>  HP Smart Array SCSI driver  
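
To confirm the rebuilt kernel actually has the driver enabled, check for the CONFIG_SCSI_HPSA symbol; a built-in driver shows up as =y. The first command assumes CONFIG_IKCONFIG_PROC is enabled, and the second assumes the usual Gentoo /usr/src/linux symlink points at the tree the kernel was built from.

$ zgrep CONFIG_SCSI_HPSA /proc/config.gz
$ grep CONFIG_SCSI_HPSA /usr/src/linux/.config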

After rebuilding the kernel, I rebooted the machine and installed the hpssacli software, which seems to be the tool commonly used to manage the HPE H240 controller. Fortunately the software was in Portage, so installation was simply a matter of running sudo emerge -av hpssacli. Software does appear to be available directly from the HPE website, but the Linux packages there are rpm and deb files, neither of which is particularly useful to me since I run neither Red Hat nor Ubuntu.

$ sudo emerge -av hpssacli

These are the packages that would be merged, in order:

Calculating dependencies... done!
[ebuild  N     ] sys-block/hpssacli-3.40.3.0::gentoo  0 KiB

Total: 1 package (1 new), Size of downloads: 0 KiB

Would you like to merge these packages? [Yes/No] y
>>> Verifying ebuild manifests
>>> Emerging (1 of 1) sys-block/hpssacli-3.40.3.0::gentoo
>>> Installing (1 of 1) sys-block/hpssacli-3.40.3.0::gentoo
>>> Recording sys-block/hpssacli in "world" favorites file...
>>> Jobs: 1 of 1 complete                           Load avg: 1.64, 1.29, 3.17
>>> Auto-cleaning packages...

>>> No outdated packages were found on your system.

 * GNU info directory index is up-to-date.
$

Various Useful Commands

  • Check if the system recognized the card.
    dmesg | grep -i h240
$ dmesg | grep -i h240
[    2.132083] hpsa 0000:05:00.0: scsi 0:0:0:0: added RAID              HP       H240             controller SSDSmartPathCap- En- Exp=1
[    2.132451] scsi 0:0:0:0: RAID              HP       H240             6.30 PQ: 0 ANSI: 5
  • Show configuration
    sudo hpssacli controller all show config
~ $ sudo hpssacli controller all show config

Smart HBA H240 in Slot 255 (RAID Mode)    (sn: PDNNK0ARH9M1F5)


   Port Name: 1I

   Port Name: 2I

   Array A (SAS, Unused Space: 0  MB)

      logicaldrive 1 (3.64 TB, RAID 5, OK)

      physicaldrive 1I:0:5 (port 1I:box 0:bay 5, SAS HDD, 2 TB, OK)
      physicaldrive 1I:0:6 (port 1I:box 0:bay 6, SAS HDD, 2 TB, OK)
      physicaldrive 1I:0:7 (port 1I:box 0:bay 7, SAS HDD, 2 TB, OK)

~ $ 
  • Show status
    sudo hpssacli controller all show status
~ $ sudo hpssacli controller all show status

Smart HBA H240 in Slot 255 (RAID Mode)
   Controller Status: OK


~ $
  • Show detailed information on controller in slot 255
    sudo hpssacli controller slot=255 show detail
~ $ sudo hpssacli controller slot=255 show detail                                                                                                                                

Smart HBA H240 in Slot 255 (RAID Mode)
   Bus Interface: PCI
   Slot: 255
   Serial Number: PDNNK0ARH9M1F5
   Cache Serial Number: PDNNK0ARH9M1F5
   Controller Status: OK
   Hardware Revision: B
   Firmware Version: 6.30-0
   Firmware Supports Online Firmware Activation: False
   Rebuild Priority: High
   Expand Priority: Medium
   Surface Scan Delay: 3 secs
   Surface Scan Mode: Idle
   Parallel Surface Scan Supported: Yes
   Current Parallel Surface Scan Count: 1
   Max Parallel Surface Scan Count: 16
   Queue Depth: Automatic
   Monitor and Performance Delay: 60  min
   Elevator Sort: Enabled
   Degraded Performance Optimization: Disabled
   Wait for Cache Room: Disabled
   Surface Analysis Inconsistency Notification: Disabled
   Post Prompt Timeout: 15 secs
   Cache Board Present: False
   Drive Write Cache: Disabled
   Controller Memory Size: 0.2
   SATA NCQ Supported: True
   Spare Activation Mode: Activate on physical drive failure (default)
   Controller Temperature (C): 53
   Number of Ports: 2 Internal only
   Encryption: Not Set
   Express Local Encryption: False
   Driver Name: hpsa
   Driver Version: 3.4.20
   Driver Supports SSD Smart Path: True
   PCI Address (Domain:Bus:Device.Function): 0000:05:00.0
   Negotiated PCIe Data Rate: PCIe 2.0 x8 (4000 MB/s)
   Controller Mode: RAID Mode
   Pending Controller Mode: RAID
   Port Max Phy Rate Limiting Supported: False
   Latency Scheduler Setting: Disabled
   Current Power Mode: MaxPerformance
   Survival Mode: Enabled
   Host Serial Number: USE6114FB2
   Sanitize Erase Supported: True
   Primary Boot Volume: None
   Secondary Boot Volume: None

~ $ 
  • Scan for new devices
    sudo hpssacli rescan
~ $ sudo hpssacli rescan
~ $
  • Show physical disk status for controller in slot 255
    sudo hpssacli controller slot=255 pd all show status
~ $ sudo hpssacli controller slot=255 pd all show status

   physicaldrive 1I:0:5 (port 1I:box 0:bay 5, 2 TB): OK
   physicaldrive 1I:0:6 (port 1I:box 0:bay 6, 2 TB): OK
   physicaldrive 1I:0:7 (port 1I:box 0:bay 7, 2 TB): OK

~ $
  • Show physical disk detailed information for controller in slot 255
    sudo hpssacli controller slot=255 pd all show detail
~ $ sudo hpssacli controller slot=255 pd all show detail                                                                                                                         

Smart HBA H240 in Slot 255 
                           
   Array A                                                                                                                                                                                     
      physicaldrive 1I:0:5
         Port: 1I
         Box: 0
         Bay: 5 
         Status: OK
         Drive Type: Data Drive
         Interface Type: SAS  
         Size: 2 TB          
         Drive exposed to OS: False
         Logical/Physical Block Size: 512/512
         Rotational Speed: 7200             
         Firmware Revision: N004           
         Serial Number: ZC2342F10000C95236K1
         WWID: 5000C500AE318BC9            
         Model: SEAGATE ST2000NM0045      
         PHY Count: 2                    
         PHY Transfer Rate: 12.0Gbps, Unknown
         Drive Authentication Status: Not Applicable
         Sanitize Erase Supported: True            
         Unrestricted Sanitize Supported: True
         Shingled Magnetic Recording Support: None

      physicaldrive 1I:0:6
         Port: 1I
         Box: 0
         Bay: 6
         Status: OK
         Drive Type: Data Drive
         Interface Type: SAS
         Size: 2 TB
         Drive exposed to OS: False
         Logical/Physical Block Size: 512/512
         Rotational Speed: 7200
         Firmware Revision: N004
         Serial Number: ZC2341KE0000C95238CU
         WWID: 5000C500AE31C189
         Model: SEAGATE ST2000NM0045
         PHY Count: 2
         PHY Transfer Rate: 12.0Gbps, Unknown
         Drive Authentication Status: Not Applicable
         Sanitize Erase Supported: True
         Unrestricted Sanitize Supported: True
         Shingled Magnetic Recording Support: None

      physicaldrive 1I:0:7
         Port: 1I
         Box: 0
         Bay: 7
         Status: OK
         Drive Type: Data Drive
         Interface Type: SAS
         Size: 2 TB
         Drive exposed to OS: False
         Logical/Physical Block Size: 512/512
         Rotational Speed: 7200
         Firmware Revision: N004
         Serial Number: ZC2342EJ0000C95236LU
         WWID: 5000C500AE318CBD
         Model: SEAGATE ST2000NM0045
         PHY Count: 2
         PHY Transfer Rate: 12.0Gbps, Unknown
         Drive Authentication Status: Not Applicable
         Sanitize Erase Supported: True
         Unrestricted Sanitize Supported: True
         Shingled Magnetic Recording Support: None

~ $
  • Show logical disk status associated with controller in slot 255
    sudo hpssacli controller slot=255 ld all show status
~ $ sudo hpssacli controller slot=255 ld all show status

   logicaldrive 1 (3.64 TB, RAID 5): OK

~ $
  • Show detailed logical disk status associated with controller in slot 255. The /mnt/dasd3 mount point in this output is covered in the sketch after this list.
    sudo hpssacli controller slot=255 ld all show detail
~ $ sudo hpssacli controller slot=255 ld all show detail

Smart HBA H240 in Slot 255

   Array A

      Logical Drive: 1
         Size: 3.64 TB
         Fault Tolerance: 5
         Heads: 255
         Sectors Per Track: 32
         Cylinders: 65535
         Strip Size: 256 KB
         Full Stripe Size: 512 KB
         Status: OK
         Unrecoverable Media Errors: None
         MultiDomain Status: OK
         Caching:  Disabled
         Parity Initialization Status: Initialization Completed
         Unique Identifier: 600508B1001C37F10BDE1911E307C368
         Disk Name: /dev/sda 
         Mount Points: /mnt/dasd3 3.6 TB Partition Number 1
         OS Status: LOCKED
         Logical Drive Label: 024920A2PDNNK0ARH9M1F56774
         Drive Type: Data
         LD Acceleration Method: All disabled


~ $
  • Toggle controller RAID mode on or off (HBA mode). **Data loss and a reboot are required; make sure you have backups if you need them.**
    sudo hpssacli controller slot=255 modify raidmode=[off|on]
$ sudo hpssacli controller slot=255 modify raidmode=off

Warning: Turning off Raid Mode will expose the physical drives to the operating
         system and RAID configuration will not be allowed. This also requires
         the server to be rebooted. Continue? (y/n)y

$ sudo hpssacli controller slot=255 modify raidmode=on
$
  • Create new RAID 0 logical drive
    sudo hpssacli controller slot=255 create type=ld drives=1I:0:5,1I:0:6 raid=0
$ sudo hpssacli controller slot=255 show config

Smart HBA H240 in Slot 255 (RAID Mode)    (sn: PDNNK0ARH9M1F5)


   Port Name: 1I

   Port Name: 2I


   Unassigned

      physicaldrive 1I:0:5 (port 1I:box 0:bay 5, SAS HDD, 2 TB, OK)
      physicaldrive 1I:0:6 (port 1I:box 0:bay 6, SAS HDD, 2 TB, OK)
      physicaldrive 1I:0:7 (port 1I:box 0:bay 7, SAS HDD, 2 TB, OK)
      physicaldrive 1I:0:8 (port 1I:box 0:bay 8, SAS HDD, 2 TB, OK)

$ sudo hpssacli controller slot=255 create type=ld drives=1I:0:5,1I:0:6 raid=0
$ sudo hpssacli controller slot=255 show config

Smart HBA H240 in Slot 255 (RAID Mode)    (sn: PDNNK0ARH9M1F5)


   Port Name: 1I

   Port Name: 2I

   Array A (SAS, Unused Space: 0  MB)

      logicaldrive 1 (3.64 TB, RAID 0, OK)

      physicaldrive 1I:0:5 (port 1I:box 0:bay 5, SAS HDD, 2 TB, OK)
      physicaldrive 1I:0:6 (port 1I:box 0:bay 6, SAS HDD, 2 TB, OK)

   Unassigned

      physicaldrive 1I:0:7 (port 1I:box 0:bay 7, SAS HDD, 2 TB, OK)
      physicaldrive 1I:0:8 (port 1I:box 0:bay 8, SAS HDD, 2 TB, OK)

$
  • Create new RAID 1 logical drive
    sudo hpssacli controller slot=255 create type=ld drives=1I:0:5,1I:0:6 raid=1
$ sudo hpssacli controller slot=255 show config

Smart HBA H240 in Slot 255 (RAID Mode)    (sn: PDNNK0ARH9M1F5)


   Port Name: 1I

   Port Name: 2I


   Unassigned

      physicaldrive 1I:0:5 (port 1I:box 0:bay 5, SAS HDD, 2 TB, OK)
      physicaldrive 1I:0:6 (port 1I:box 0:bay 6, SAS HDD, 2 TB, OK)
      physicaldrive 1I:0:7 (port 1I:box 0:bay 7, SAS HDD, 2 TB, OK)
      physicaldrive 1I:0:8 (port 1I:box 0:bay 8, SAS HDD, 2 TB, OK)

$ sudo hpssacli controller slot=255 create type=ld drives=1I:0:5,1I:0:6 raid=1
$ sudo hpssacli controller slot=255 show config

Smart HBA H240 in Slot 255 (RAID Mode)    (sn: PDNNK0ARH9M1F5)


   Port Name: 1I

   Port Name: 2I

   Array A (SAS, Unused Space: 0  MB)

      logicaldrive 1 (1.82 TB, RAID 1, OK)

      physicaldrive 1I:0:5 (port 1I:box 0:bay 5, SAS HDD, 2 TB, OK)
      physicaldrive 1I:0:6 (port 1I:box 0:bay 6, SAS HDD, 2 TB, OK)

   Unassigned

      physicaldrive 1I:0:7 (port 1I:box 0:bay 7, SAS HDD, 2 TB, OK)
      physicaldrive 1I:0:8 (port 1I:box 0:bay 8, SAS HDD, 2 TB, OK)

$
  • Create new RAID 5 logical drive
    sudo hpssacli controller slot=255 create type=ld drives=1I:0:5,1I:0:6,1I:0:7 raid=5
$ sudo hpssacli controller slot=255 show config

Smart HBA H240 in Slot 255 (RAID Mode)    (sn: PDNNK0ARH9M1F5)


   Port Name: 1I

   Port Name: 2I


   Unassigned

      physicaldrive 1I:0:5 (port 1I:box 0:bay 5, SAS HDD, 2 TB, OK)
      physicaldrive 1I:0:6 (port 1I:box 0:bay 6, SAS HDD, 2 TB, OK)
      physicaldrive 1I:0:7 (port 1I:box 0:bay 7, SAS HDD, 2 TB, OK)
      physicaldrive 1I:0:8 (port 1I:box 0:bay 8, SAS HDD, 2 TB, OK)

$ sudo hpssacli controller slot=255 create type=ld drives=1I:0:5,1I:0:6,1I:0:7 raid=5
$ sudo hpssacli controller slot=255 show config

Smart HBA H240 in Slot 255 (RAID Mode)    (sn: PDNNK0ARH9M1F5)


   Port Name: 1I

   Port Name: 2I

   Array A (SAS, Unused Space: 0  MB)

      logicaldrive 1 (3.64 TB, RAID 5, OK)

      physicaldrive 1I:0:5 (port 1I:box 0:bay 5, SAS HDD, 2 TB, OK)
      physicaldrive 1I:0:6 (port 1I:box 0:bay 6, SAS HDD, 2 TB, OK)
      physicaldrive 1I:0:7 (port 1I:box 0:bay 7, SAS HDD, 2 TB, OK)

   Unassigned

      physicaldrive 1I:0:8 (port 1I:box 0:bay 8, SAS HDD, 2 TB, OK)

$
  • Delete logical drive
    sudo hpssacli controller slot=255 ld 1 delete
$ sudo hpssacli controller slot=255 ld 1 show status

   logicaldrive 1 (3.64 TB, RAID 5): Transforming, 0.13% complete

$ sudo hpssacli controller slot=255 ld 1 delete

Warning: Deleting the specified device(s) will result in data being lost.
         Continue? (y/n) y

$ sudo hpssacli controller slot=255 ld 1 show status

Error: The specified device does not have a logicaldrive identified by "1"

$
  • Erase physical drive
    sudo hpssacli controller slot=255 pd 1I:0:5 modify erase
$ sudo hpssacli controller slot=255 pd 1I:0:8 modify erase

Warning: The erase process will begin immediately. All drive contents will be
         lost. Continue? (y/n) y

$
  • Add physical drive to logical volume
    sudo hpssacli controller slot=255 ld 1 add drives=1I:0:8
$ sudo hpssacli controller slot=255 show config

Smart HBA H240 in Slot 255 (RAID Mode)    (sn: PDNNK0ARH9M1F5)


   Port Name: 1I

   Port Name: 2I

   Array A (SAS, Unused Space: 0  MB)

      logicaldrive 1 (1.82 TB, RAID 1, OK)

      physicaldrive 1I:0:5 (port 1I:box 0:bay 5, SAS HDD, 2 TB, OK)
      physicaldrive 1I:0:6 (port 1I:box 0:bay 6, SAS HDD, 2 TB, OK)

   Unassigned

      physicaldrive 1I:0:7 (port 1I:box 0:bay 7, SAS HDD, 2 TB, OK)
      physicaldrive 1I:0:8 (port 1I:box 0:bay 8, SAS HDD, 2 TB, Erase In Progress)

$ sudo hpssacli controller slot=255 ld 1 add drives=1I:0:7

Warning: An even number of physical drives is required for this array because
         it has one or more logical drives with a fault tolerance of RAID
         1(+0). However, you can migrate all RAID 1(+0) logical drives on this
         array to the highest available fault tolerance. Would you like to
         migrate all RAID 1(+0) logical drives to the highest available fault
         tolerance? (y/n) y

$ sudo hpssacli controller slot=255 show config

Smart HBA H240 in Slot 255 (RAID Mode)    (sn: PDNNK0ARH9M1F5)


   Port Name: 1I

   Port Name: 2I

   Array A (SAS, Unused Space: 2861545  MB)

      logicaldrive 1 (1.82 TB, RAID 5, Waiting for Transformation)

      physicaldrive 1I:0:5 (port 1I:box 0:bay 5, SAS HDD, 2 TB, OK)
      physicaldrive 1I:0:6 (port 1I:box 0:bay 6, SAS HDD, 2 TB, OK)
      physicaldrive 1I:0:7 (port 1I:box 0:bay 7, SAS HDD, 2 TB, OK)

   Unassigned

      physicaldrive 1I:0:8 (port 1I:box 0:bay 8, SAS HDD, 2 TB, Erase In Progress)

$
  • Add spare disks
    sudo hpssacli controller slot=255 array all add spares=1I:0:8
$ sudo hpssacli controller all show config

Smart HBA H240 in Slot 255 (RAID Mode)    (sn: PDNNK0ARH9M1F5)


   Port Name: 1I

   Port Name: 2I

   Array A (SAS, Unused Space: 0  MB)

      logicaldrive 1 (3.64 TB, RAID 5, OK)

      physicaldrive 1I:0:5 (port 1I:box 0:bay 5, SAS HDD, 2 TB, OK)
      physicaldrive 1I:0:6 (port 1I:box 0:bay 6, SAS HDD, 2 TB, OK)
      physicaldrive 1I:0:7 (port 1I:box 0:bay 7, SAS HDD, 2 TB, OK)

   Unassigned

      physicaldrive 1I:0:8 (port 1I:box 0:bay 8, SAS HDD, 2 TB, OK)

$ sudo hpssacli controller slot=255 array all add spares=1I:0:8
$ sudo hpssacli controller all show config

Smart HBA H240 in Slot 255 (RAID Mode)    (sn: PDNNK0ARH9M1F5)


   Port Name: 1I

   Port Name: 2I

   Array A (SAS, Unused Space: 0  MB)

      logicaldrive 1 (3.64 TB, RAID 5, OK)

      physicaldrive 1I:0:5 (port 1I:box 0:bay 5, SAS HDD, 2 TB, OK)
      physicaldrive 1I:0:6 (port 1I:box 0:bay 6, SAS HDD, 2 TB, OK)
      physicaldrive 1I:0:7 (port 1I:box 0:bay 7, SAS HDD, 2 TB, OK)
      physicaldrive 1I:0:8 (port 1I:box 0:bay 8, SAS HDD, 2 TB, OK, spare)

$
  • Remove spare disks
    sudo hpssacli controller slot=255 array all remove spares=1I:0:8
$ sudo hpssacli controller all show config

Smart HBA H240 in Slot 255 (RAID Mode)    (sn: PDNNK0ARH9M1F5)


   Port Name: 1I

   Port Name: 2I

   Array A (SAS, Unused Space: 0  MB)

      logicaldrive 1 (3.64 TB, RAID 5, OK)

      physicaldrive 1I:0:5 (port 1I:box 0:bay 5, SAS HDD, 2 TB, OK)
      physicaldrive 1I:0:6 (port 1I:box 0:bay 6, SAS HDD, 2 TB, OK)
      physicaldrive 1I:0:7 (port 1I:box 0:bay 7, SAS HDD, 2 TB, OK)
      physicaldrive 1I:0:8 (port 1I:box 0:bay 8, SAS HDD, 2 TB, OK, spare)

$ sudo hpssacli controller slot=255 array all remove spares=1I:0:8
$ sudo hpssacli controller all show config

Smart HBA H240 in Slot 255 (RAID Mode)    (sn: PDNNK0ARH9M1F5)


   Port Name: 1I

   Port Name: 2I

   Array A (SAS, Unused Space: 0  MB)

      logicaldrive 1 (3.64 TB, RAID 5, OK)

      physicaldrive 1I:0:5 (port 1I:box 0:bay 5, SAS HDD, 2 TB, OK)
      physicaldrive 1I:0:6 (port 1I:box 0:bay 6, SAS HDD, 2 TB, OK)
      physicaldrive 1I:0:7 (port 1I:box 0:bay 7, SAS HDD, 2 TB, OK)

   Unassigned

      physicaldrive 1I:0:8 (port 1I:box 0:bay 8, SAS HDD, 2 TB, OK)

$
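
Once a logical drive exists and reports OK, the operating system sees it as an ordinary SCSI disk (here /dev/sda, as shown in the ld detail output earlier). For reference, below is a sketch of one way to put a filesystem on the logical drive and mount it at /mnt/dasd3; the GPT label and ext4 filesystem are my own choices, not something the controller dictates.

$ lsblk                                            # confirm the logical drive is visible as a block device
$ sudo parted /dev/sda mklabel gpt
$ sudo parted -a optimal /dev/sda mkpart primary ext4 0% 100%
$ sudo mkfs.ext4 /dev/sda1
$ sudo mkdir -p /mnt/dasd3
$ sudo mount /dev/sda1 /mnt/dasd3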

Benchmarking

For benchmarking, I am using the Flexible I/O Tester, also known as fio. This tool seems to be in most distributions’ package repositories, and that is the case for Gentoo: another straightforward install from Portage that completed without issue.

Installation of Fio

$ sudo emerge -va fio

These are the packages that would be merged, in order:

Calculating dependencies... done!
[ebuild  N     ] sys-block/fio-3.27-r1::gentoo  USE="aio curl gnuplot gtk io-uring numa python zlib -glusterfs -rbd -rdma (-static) -tcmalloc -test -zbc" PYTHON_TARGETS="python3_9 -python3_8" 0 KiB

Total: 1 package (1 new), Size of downloads: 0 KiB

Would you like to merge these packages? [Yes/No] y
>>> Verifying ebuild manifests
>>> Emerging (1 of 1) sys-block/fio-3.27-r1::gentoo
>>> Installing (1 of 1) sys-block/fio-3.27-r1::gentoo
>>> Recording sys-block/fio in "world" favorites file...
>>> Jobs: 1 of 1 complete                           Load avg: 2.94, 1.35, 0.96
>>> Auto-cleaning packages...

>>> No outdated packages were found on your system.

 * GNU info directory index is up-to-date.
$

Fio Test Profile

This is the test profile I came up with. I am not a professional tester, and I had never used fio before yesterday, so I know it is not the best and may not even do what I think it does. My intent was to create a profile that runs four distinct tests, each repeated four times. Using a 1024K block size, there are sequential read and sequential write tests; using a 4K block size, there are random read and random write tests.

; -- Start Big Blocks --
[global]
directory=/mnt/dasd3/
filename=test.file
ioengine=libaio
size=2g
io_size=10g
direct=1
group_reporting=1
loops=4
runtime=60

; reads big blocks job
[job0-rbb]
description="Big block reads"
stonewall
rw=read
blocksize=1024k
iodepth=32
fsync=10000

; writes big blocks job
[job1-wbb]
description="Big block writes"
stonewall
rw=write
blocksize=1024k
iodepth=32
fsync=10000

; reads 4k blocks job
[job2-r4b]
description="Random 4K block reads"
stonewall
rw=randread
blocksize=4k
iodepth=1
fsync=1

; writes 4k blocks job
[job3-w4b]
description="Random 4K block writes"
stonewall
blocksize=4k
rw=randwrite
iodepth=1
fsync=1
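
Before committing to a long run, fio can syntax-check the job file without executing it, and the report can be written to a file (plain text or JSON) for later comparison. These are standard fio command-line options, though I have not explored them beyond this:

$ fio --parse-only big_blocks.fio
$ sudo fio --output=run1.txt big_blocks.fio
$ sudo fio --output-format=json --output=run1.json big_blocks.fio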

Sample Fio Output Using Profile

Below is the output from a sample run of fio using the profile I created. This was run on my workstation while it was in normal use, and it tests a logical RAID 5 volume. At this point I am not able to judge whether the performance is good or bad, but now I have a baseline of sorts to measure against.

$ sudo fio big_blocks.fio
job0-rbb: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
job1-wbb: (g=1): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
job2-r4b: (g=2): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
job3-w4b: (g=3): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
fio-3.27
Starting 4 processes
Jobs: 1 (f=1): [_(3),w(1)][60.8%][w=80KiB/s][w=20 IOPS][eta 01m:51s]
job0-rbb: (groupid=0, jobs=1): err= 0: pid=1629085: Tue Oct 12 05:54:23 2021
  Description  : ["Big block reads"]
  read: IOPS=543, BW=544MiB/s (570MB/s)(10.0GiB/18838msec)
    slat (usec): min=14, max=634, avg=32.74, stdev=21.00
    clat (msec): min=13, max=326, avg=58.78, stdev=15.85
     lat (msec): min=13, max=326, avg=58.82, stdev=15.86
    clat percentiles (msec):
     |  1.00th=[   39],  5.00th=[   45], 10.00th=[   48], 20.00th=[   52],
     | 30.00th=[   55], 40.00th=[   57], 50.00th=[   59], 60.00th=[   61],
     | 70.00th=[   63], 80.00th=[   64], 90.00th=[   66], 95.00th=[   68],
     | 99.00th=[  103], 99.50th=[  122], 99.90th=[  305], 99.95th=[  313],
     | 99.99th=[  326]
   bw (  KiB/s): min=336971, max=615631, per=100.00%, avg=558915.65, stdev=43867.88, samples=37
   iops        : min=  329, max=  601, avg=545.76, stdev=42.83, samples=37
  lat (msec)   : 20=0.02%, 50=15.06%, 100=83.81%, 250=0.80%, 500=0.31%
  cpu          : usr=0.36%, sys=1.80%, ctx=10141, majf=0, minf=539                         
  IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwts: total=10240,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32
job1-wbb: (groupid=1, jobs=1): err= 0: pid=1629582: Tue Oct 12 05:54:23 2021
  Description  : ["Big block writes"]
  write: IOPS=317, BW=318MiB/s (333MB/s)(10.0GiB/32248msec); 0 zone resets
    slat (usec): min=32, max=104041, avg=132.32, stdev=2262.03
    clat (msec): min=15, max=207, avg=100.56, stdev=13.64
     lat (msec): min=15, max=290, avg=100.70, stdev=13.69
    clat percentiles (msec):
     |  1.00th=[   57],  5.00th=[   87], 10.00th=[   95], 20.00th=[   97],
     | 30.00th=[   99], 40.00th=[  100], 50.00th=[  101], 60.00th=[  102],
     | 70.00th=[  103], 80.00th=[  104], 90.00th=[  106], 95.00th=[  111],
     | 99.00th=[  167], 99.50th=[  184], 99.90th=[  201], 99.95th=[  205],
     | 99.99th=[  207]
   bw (  KiB/s): min=274432, max=364544, per=100.00%, avg=325209.52, stdev=13222.21, samples=64
   iops        : min=  268, max=  356, avg=317.44, stdev=12.89, samples=64
  lat (msec)   : 20=0.02%, 50=0.72%, 100=49.30%, 250=49.96%
  fsync/fdatasync/sync_file_range:
    sync (nsec): min=855, max=855, avg=855.00, stdev= 0.00
    sync percentiles (nsec):
     |  1.00th=[  852],  5.00th=[  852], 10.00th=[  852], 20.00th=[  852],
     | 30.00th=[  852], 40.00th=[  852], 50.00th=[  852], 60.00th=[  852],
     | 70.00th=[  852], 80.00th=[  852], 90.00th=[  852], 95.00th=[  852],
     | 99.00th=[  852], 99.50th=[  852], 99.90th=[  852], 99.95th=[  852],
     | 99.99th=[  852]
  cpu          : usr=1.66%, sys=1.22%, ctx=10574, majf=0, minf=13
  IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,10240,0,1 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32
job2-r4b: (groupid=2, jobs=1): err= 0: pid=1630252: Tue Oct 12 05:54:23 2021
  Description  : ["Random 4K block reads"]
  read: IOPS=201, BW=806KiB/s (826kB/s)(47.2MiB/60001msec)
    slat (usec): min=11, max=697, avg=30.55, stdev=11.74
    clat (usec): min=127, max=43659, avg=4922.33, stdev=2858.34
     lat (usec): min=140, max=43696, avg=4953.51, stdev=2858.42
    clat percentiles (usec):
     |  1.00th=[  163],  5.00th=[  182], 10.00th=[ 1336], 20.00th=[ 2212],
     | 30.00th=[ 3097], 40.00th=[ 4015], 50.00th=[ 4948], 60.00th=[ 5800],
     | 70.00th=[ 6718], 80.00th=[ 7635], 90.00th=[ 8455], 95.00th=[ 8979],
     | 99.00th=[ 9372], 99.50th=[ 9634], 99.90th=[23462], 99.95th=[34341],
     | 99.99th=[41681]
   bw (  KiB/s): min=  681, max=  936, per=99.97%, avg=806.61, stdev=53.49, samples=119
   iops        : min=  170, max=  234, avg=201.60, stdev=13.37, samples=119
  lat (usec)   : 250=6.28%, 500=0.25%, 750=0.06%, 1000=0.57%
  lat (msec)   : 2=10.50%, 4=22.12%, 10=59.81%, 20=0.30%, 50=0.11%
  cpu          : usr=0.23%, sys=0.58%, ctx=12329, majf=0, minf=14
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=12094,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1
job3-w4b: (groupid=3, jobs=1): err= 0: pid=1631632: Tue Oct 12 05:54:23 2021
  Description  : ["Random 4K block writes"]
  write: IOPS=20, BW=81.1KiB/s (83.1kB/s)(4868KiB/60021msec); 0 zone resets
    slat (nsec): min=34039, max=92245, avg=52357.03, stdev=3870.05
    clat (usec): min=10007, max=65911, avg=24272.28, stdev=5443.20
     lat (usec): min=10059, max=65961, avg=24325.66, stdev=5443.45
    clat percentiles (usec):
     |  1.00th=[11338],  5.00th=[15401], 10.00th=[17433], 20.00th=[20055],
     | 30.00th=[21627], 40.00th=[23200], 50.00th=[24511], 60.00th=[25822],
     | 70.00th=[26870], 80.00th=[28443], 90.00th=[30802], 95.00th=[32113],
     | 99.00th=[36439], 99.50th=[40109], 99.90th=[55837], 99.95th=[65799],
     | 99.99th=[65799]
   bw (  KiB/s): min=   71, max=   96, per=99.87%, avg=81.06, stdev= 4.78, samples=119
   iops        : min=   17, max=   24, avg=20.25, stdev= 1.22, samples=119
  lat (msec)   : 20=20.21%, 50=79.46%, 100=0.33%
  fsync/fdatasync/sync_file_range:
    sync (nsec): min=620, max=1391, avg=856.08, stdev=95.22
    sync percentiles (nsec):
     |  1.00th=[  708],  5.00th=[  740], 10.00th=[  756], 20.00th=[  780],
     | 30.00th=[  796], 40.00th=[  820], 50.00th=[  836], 60.00th=[  860],
     | 70.00th=[  892], 80.00th=[  924], 90.00th=[  980], 95.00th=[ 1048],
     | 99.00th=[ 1144], 99.50th=[ 1176], 99.90th=[ 1256], 99.95th=[ 1384],
     | 99.99th=[ 1384]
  cpu          : usr=0.09%, sys=0.10%, ctx=2433, majf=0, minf=12
  IO depths    : 1=199.9%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,1217,0,1216 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=544MiB/s (570MB/s), 544MiB/s-544MiB/s (570MB/s-570MB/s), io=10.0GiB (10.7GB), run=18838-18838msec

Run status group 1 (all jobs):
  WRITE: bw=318MiB/s (333MB/s), 318MiB/s-318MiB/s (333MB/s-333MB/s), io=10.0GiB (10.7GB), run=32248-32248msec

Run status group 2 (all jobs):
   READ: bw=806KiB/s (826kB/s), 806KiB/s-806KiB/s (826kB/s-826kB/s), io=47.2MiB (49.5MB), run=60001-60001msec

Run status group 3 (all jobs):
  WRITE: bw=81.1KiB/s (83.1kB/s), 81.1KiB/s-81.1KiB/s (83.1kB/s-83.1kB/s), io=4868KiB (4985kB), run=60021-60021msec

Disk stats (read/write):
  sda: ios=32574/24134, merge=0/1221, ticks=1238171/2069336, in_queue=3307508, util=99.27%
$

The summary lines at the bottom of the report are all I need for now as a beginner. Large sequential reads have the best performance, followed by large sequential writes. Random small writes have the worst performance of all. Below are the summary results from a second run, and they seem in line with the original results.

Results From a Second Run

Run status group 0 (all jobs):
   READ: bw=550MiB/s (577MB/s), 550MiB/s-550MiB/s (577MB/s-577MB/s), io=10.0GiB (10.7GB), run=18607-18607msec

Run status group 1 (all jobs):
  WRITE: bw=317MiB/s (332MB/s), 317MiB/s-317MiB/s (332MB/s-332MB/s), io=10.0GiB (10.7GB), run=32293-32293msec

Run status group 2 (all jobs):
   READ: bw=808KiB/s (827kB/s), 808KiB/s-808KiB/s (827kB/s-827kB/s), io=47.3MiB (49.6MB), run=60002-60002msec

Run status group 3 (all jobs):
  WRITE: bw=81.8KiB/s (83.8kB/s), 81.8KiB/s-81.8KiB/s (83.8kB/s-83.8kB/s), io=4912KiB (5030kB), run=60012-60012msec

Disk stats (read/write):
  sda: ios=32599/24176, merge=0/1235, ticks=1223915/2068508, in_queue=3292422, util=99.36%

Summary and Next Steps

I’ve managed to set up a new SAS controller and drives in my workstation, and I have worked out how to do basic performance testing of the drives. Here are the additional steps I need to iterate through to decide how I want to put my new disks into actual use.

  • Test the performance of each hardware RAID level the card supports.
  • Use the controller to present the drives to the OS as a JBOD, then repeat the performance tests using software RAID.
  • Learn how to generate graphs from fio performance test results (see the sketch after this list).
  • Investigate using an SSD for caching.
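
As a starting point for the graphing item, fio can write per-job bandwidth, IOPS, and latency logs, and the fio_generate_plots helper that ships with fio (pulled in by the gnuplot USE flag on Gentoo, as far as I can tell) turns those logs into graphs. A rough sketch I have not tried yet:

; additions to the [global] section of big_blocks.fio
write_bw_log=bigblocks
write_iops_log=bigblocks
write_lat_log=bigblocks
log_avg_msec=1000

$ sudo fio big_blocks.fio
$ fio_generate_plots bigblocks    # gnuplot-based; picks up the *_bw/_iops/_lat logs in the current directory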

References