Performance Benchmark
- Use kbench as the benchmarking tool.
- kbench enables `--idle-prof`, which is harmful to storage performance, so `--idle-prof` needs to be removed before benchmarking (see the sketch after this list).
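
If the flag has to be stripped programmatically, the sketch below shows one way to filter fio's `--idle-prof` option out of an argument list before launching the benchmark. The argument list and the place where kbench builds it are assumptions; adapt this to the actual fio invocation in your kbench deployment.

```python
# Minimal sketch (assumed arguments): drop fio's --idle-prof flag, in both its
# bare and --idle-prof=<mode> forms, from a benchmark command before running it.

def strip_idle_prof(fio_args):
    """Return a copy of fio_args without any --idle-prof option."""
    return [arg for arg in fio_args if not arg.startswith("--idle-prof")]

if __name__ == "__main__":
    # Hypothetical fio command line resembling a kbench IOPS run.
    args = [
        "fio", "--name=iops-test", "--bs=4k", "--iodepth=128",
        "--numjobs=8", "--norandommap", "--idle-prof=percpu",
    ]
    print(strip_idle_prof(args))
```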
| Cloud provider | Equinix |
|---|---|
| Machine | m3.large.x86 |
| OS | Ubuntu 24.04 (6.8.0-56-generic) |
| Number of Nodes | 3 |
| Storage | 1 NVMe SSD (Micron_9300_MTFDHAL3T8TDP) |

| Metric | Parameters |
|---|---|
| IOPS | bs=4K, iodepth=128, numjobs=8, norandommap=1 |
| Bandwidth | bs=128K, iodepth=16, numjobs=4 |
| Latency | bs=4K, iodepth=1 |
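
The parameters in the table map directly onto fio command-line flags. The sketch below is a rough, standalone approximation of the three profiles, not the exact kbench job definitions: it assumes fio is installed, `/data/fio-test` lives on the volume under test, a 60-second time-based run, and read-side workloads only for brevity.

```python
# Minimal sketch (assumed paths and runtime): run the IOPS, bandwidth, and
# latency profiles from the table with fio and parse its JSON output.
import json
import subprocess

COMMON = [
    "fio", "--direct=1", "--ioengine=libaio", "--time_based", "--runtime=60",
    "--filename=/data/fio-test", "--size=10G", "--output-format=json",
]

PROFILES = {
    "iops":      ["--name=iops", "--rw=randread", "--bs=4k", "--iodepth=128",
                  "--numjobs=8", "--norandommap", "--group_reporting"],
    "bandwidth": ["--name=bw", "--rw=read", "--bs=128k", "--iodepth=16",
                  "--numjobs=4", "--group_reporting"],
    "latency":   ["--name=lat", "--rw=randread", "--bs=4k", "--iodepth=1"],
}

def run_profile(extra_args):
    out = subprocess.run(COMMON + extra_args, check=True,
                         capture_output=True, text=True)
    read = json.loads(out.stdout)["jobs"][0]["read"]
    return {
        "iops": read["iops"],
        "bw_kib_s": read["bw"],                    # bandwidth in KiB/s
        "lat_us": read["lat_ns"]["mean"] / 1000,   # mean completion latency in µs
    }

if __name__ == "__main__":
    for name, extra in PROFILES.items():
        print(name, run_profile(extra))
```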






Note
In v1.9.0, the UBLK frontend uses a single I/O queue and disables zero-copy, with no option to adjust these settings. This limitation is the primary cause of the I/O performance bottleneck when using the UBLK frontend.
| Cloud provider | Equinix |
|---|---|
| Machine | m3.large.x86 |
| OS | Rocky Linux 9 (5.14.0-427.42.1.el9_4.x86_64) |
| Number of Nodes | 3 |
| Storage | 1 NVMe SSD (Micron_9300_MTFDHAL3T8TDP) |

| Metric | Parameters |
|---|---|
| IOPS | bs=4K, iodepth=128, numjobs=8, norandommap=1 |
| Bandwidth | bs=128K, iodepth=16, numjobs=4 |
| Latency | bs=4K, iodepth=1 |




| Cloud provider | Equinix |
|---|---|
| Machine | m3.small.x86 |
| OS | Rocky Linux 9 (5.14.0-427.24.1.el9_4.x86_64) |
| Number of Nodes | 3 |
| Storage | 1 SATA SSD (Micron_5300_MTFD) |

| Metric | Parameters |
|---|---|
| IOPS | bs=4K, iodepth=128, numjobs=8, norandommap=1 |
| Bandwidth | bs=128K, iodepth=16, numjobs=4 |
| Latency | bs=4K, iodepth=1 |
Important
The `spdk_tgt` process for the v2 data engine currently runs on a single CPU core, which is responsible for managing multiple I/O queues, leading to limited I/O performance compared to the v1 data engine, as outlined in this report. Longhorn v1.8 will introduce two features aimed at optimizing computing resource usage for `spdk_tgt` to enhance I/O performance.
The preliminary I/O performance results with 2 CPU cores are illustrated in the following figures.
(Figures omitted: higher is better for the first two charts; lower is better for the third.)
- Machine: Dallas/m3.small.x86
- OS: Rocky Linux 9 / 5.15.0
- Network throughput between nodes (tested by iperf over 60 seconds; see the sketch after this list): 25.0 Gbits/sec
- share-manager Pod and workload Pod are not located on the same node
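
The network figure can be reproduced with a short client-side run. The sketch below assumes iperf3 (rather than classic iperf, for its JSON output), an iperf3 server already running on the peer node (`iperf3 -s`), and a placeholder peer address.

```python
# Minimal sketch (assumed iperf3 and placeholder address): measure node-to-node
# throughput for 60 seconds and report it in Gbits/sec.
import json
import subprocess

def measure_throughput_gbits(server_ip, seconds=60):
    out = subprocess.run(
        ["iperf3", "-c", server_ip, "-t", str(seconds), "-J"],
        check=True, capture_output=True, text=True,
    )
    result = json.loads(out.stdout)
    return result["end"]["sum_received"]["bits_per_second"] / 1e9

if __name__ == "__main__":
    print(f"{measure_throughput_gbits('192.168.1.2'):.1f} Gbits/sec")
```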
- Machine: Japan/m3.small.x86
- OS: Ubuntu 22.04 / 5.15.0-33-generic
- Network throughput between nodes (tested by iperf over 60 seconds): 15.0 Gbits/sec
- share-manager Pod and workload Pod are not located on the same node
The baseline of the data disk was also measured using rancher/local-path-provisioner. The benchmarking results are in the [Performance Investigation: kbench tab].
- Machine: Japan/m3.small.x86
- OS: Ubuntu 22.04 / 5.15.0-33-generic
- Network throughput between nodes (tested by iperf over 60 seconds): 15.0 Gbits/sec
- share-manager Pod and workload Pod are not located on the same node