FIO benchmark
Chris Lu edited this page 2023-04-16 22:48:16 -07:00
Here are the results of running the fio tool on my personal laptop. They are for reference only; please benchmark on your own hardware.
- The server and mount are restarted before each run.
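The restart step above can be sketched roughly as follows. The data directory, mount point, filer address, and sleep durations are hypothetical placeholders; adjust them to your setup.

```shell
#!/bin/sh
# Sketch only: restart the SeaweedFS server and FUSE mount before a run.
# /tmp/seaweedfs and /mnt/seaweedfs are hypothetical paths; adjust as needed.
restart_seaweedfs() {
  pkill weed 2>/dev/null || true                      # stop any running weed processes
  weed server -dir=/tmp/seaweedfs -filer &            # master + volume + filer in one process
  sleep 3                                             # give the server time to come up
  weed mount -filer=localhost:8888 -dir=/mnt/seaweedfs &
  sleep 2                                             # wait for the mount to be ready
}
```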
Prepare a File
Go to any mounted directory and create the 10GB test file:
fio --randrepeat=1 --name=test --filename=fiotest --bs=128k --iodepth=1 --size=10G
Random Read
Random Read with 4KB, 128KB and 2MB block sizes, with direct IO enabled.
$ fio --randrepeat=1 --name=test --filename=fiotest --bs=4k --iodepth=1 --readwrite=randread --size=10G -direct=1
test: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1
fio-3.28
Starting 1 process
Jobs: 1 (f=1): [r(1)][100.0%][r=13.2MiB/s][r=3373 IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=9828: Sun Apr 16 22:34:33 2023
read: IOPS=3007, BW=11.7MiB/s (12.3MB/s)(10.0GiB/871667msec)
clat (usec): min=52, max=15787, avg=328.89, stdev=117.07
lat (usec): min=53, max=15788, avg=329.22, stdev=117.09
clat percentiles (usec):
| 1.00th=[ 258], 5.00th=[ 265], 10.00th=[ 269], 20.00th=[ 277],
| 30.00th=[ 281], 40.00th=[ 285], 50.00th=[ 293], 60.00th=[ 302],
| 70.00th=[ 314], 80.00th=[ 343], 90.00th=[ 437], 95.00th=[ 586],
| 99.00th=[ 693], 99.50th=[ 750], 99.90th=[ 1139], 99.95th=[ 1467],
| 99.99th=[ 3097]
bw ( KiB/s): min= 7057, max=13892, per=100.00%, avg=12041.13, stdev=956.93, samples=1736
iops : min= 1764, max= 3473, avg=3010.03, stdev=239.26, samples=1736
lat (usec) : 100=0.07%, 250=0.11%, 500=91.31%, 750=8.01%, 1000=0.37%
lat (msec) : 2=0.11%, 4=0.02%, 10=0.01%, 20=0.01%
cpu : usr=1.62%, sys=9.58%, ctx=2704498, majf=0, minf=31
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=2621440,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
READ: bw=11.7MiB/s (12.3MB/s), 11.7MiB/s-11.7MiB/s (12.3MB/s-12.3MB/s), io=10.0GiB (10.7GB), run=871667-871667msec
$ fio --randrepeat=1 --name=test --filename=fiotest --bs=128k --iodepth=1 --readwrite=randread --size=10G -direct=1
test: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=psync, iodepth=1
fio-3.28
Starting 1 process
Jobs: 1 (f=1): [r(1)][100.0%][r=283MiB/s][r=2260 IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=45279: Sun Apr 16 22:42:53 2023
read: IOPS=2420, BW=303MiB/s (317MB/s)(10.0GiB/33840msec)
clat (usec): min=93, max=6284, avg=409.46, stdev=84.31
lat (usec): min=93, max=6285, avg=409.75, stdev=84.33
clat percentiles (usec):
| 1.00th=[ 347], 5.00th=[ 359], 10.00th=[ 367], 20.00th=[ 379],
| 30.00th=[ 383], 40.00th=[ 388], 50.00th=[ 396], 60.00th=[ 400],
| 70.00th=[ 412], 80.00th=[ 429], 90.00th=[ 465], 95.00th=[ 502],
| 99.00th=[ 619], 99.50th=[ 701], 99.90th=[ 1516], 99.95th=[ 1647],
| 99.99th=[ 2573]
bw ( KiB/s): min=268518, max=334080, per=100.00%, avg=310269.63, stdev=15104.91, samples=67
iops : min= 2097, max= 2610, avg=2423.64, stdev=118.06, samples=67
lat (usec) : 100=0.02%, 250=0.27%, 500=94.61%, 750=4.69%, 1000=0.16%
lat (msec) : 2=0.24%, 4=0.01%, 10=0.01%
cpu : usr=1.28%, sys=10.74%, ctx=83219, majf=5, minf=56
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=81920,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
READ: bw=303MiB/s (317MB/s), 303MiB/s-303MiB/s (317MB/s-317MB/s), io=10.0GiB (10.7GB), run=33840-33840msec
$ fio --randrepeat=1 --name=test --filename=fiotest --bs=2m --iodepth=1 --readwrite=randread --size=10G -direct=1
test: (g=0): rw=randread, bs=(R) 2048KiB-2048KiB, (W) 2048KiB-2048KiB, (T) 2048KiB-2048KiB, ioengine=psync, iodepth=1
fio-3.28
Starting 1 process
Jobs: 1 (f=1): [r(1)][100.0%][r=528MiB/s][r=264 IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=47911: Sun Apr 16 22:44:13 2023
read: IOPS=261, BW=522MiB/s (547MB/s)(10.0GiB/19614msec)
clat (usec): min=2216, max=19256, avg=3823.48, stdev=698.64
lat (usec): min=2216, max=19257, avg=3823.90, stdev=698.67
clat percentiles (usec):
| 1.00th=[ 2474], 5.00th=[ 3458], 10.00th=[ 3490], 20.00th=[ 3556],
| 30.00th=[ 3621], 40.00th=[ 3654], 50.00th=[ 3720], 60.00th=[ 3785],
| 70.00th=[ 3851], 80.00th=[ 4015], 90.00th=[ 4293], 95.00th=[ 4555],
| 99.00th=[ 5473], 99.50th=[ 7570], 99.90th=[13173], 99.95th=[15664],
| 99.99th=[19268]
bw ( KiB/s): min=333868, max=585728, per=99.76%, avg=533330.84, stdev=41527.52, samples=38
iops : min= 163, max= 286, avg=259.92, stdev=20.19, samples=38
lat (msec) : 4=80.10%, 10=19.65%, 20=0.25%
cpu : usr=0.23%, sys=37.77%, ctx=82174, majf=0, minf=541
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=5120,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
READ: bw=522MiB/s (547MB/s), 522MiB/s-522MiB/s (547MB/s-547MB/s), io=10.0GiB (10.7GB), run=19614-19614msec
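If you want to collect results across runs, the summary line at the end of each run can be parsed. A minimal sketch, using the 4KB random-read summary above as sample input:

```shell
#!/bin/sh
# Extract the aggregate bandwidth from a fio run-status summary line.
# The sample line is copied from the 4KB random-read run above.
line='READ: bw=11.7MiB/s (12.3MB/s), 11.7MiB/s-11.7MiB/s (12.3MB/s-12.3MB/s), io=10.0GiB (10.7GB), run=871667-871667msec'
bw=$(printf '%s\n' "$line" | sed -n 's/.*bw=\([^ ]*\) .*/\1/p')
echo "$bw"    # prints 11.7MiB/s
```

For machine-readable output, fio also supports `--output-format=json`, which avoids text parsing entirely.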
Sequential Read
Sequential Read with 4KB, 128KB and 2MB block sizes. Direct IO is not set here, so reads may be served from the kernel page cache.
$ fio --randrepeat=1 --name=test --filename=fiotest --bs=4k --iodepth=1 --readwrite=read --size=10G
test: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1
fio-3.28
Starting 1 process
Jobs: 1 (f=1): [R(1)][100.0%][r=512MiB/s][r=131k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=50525: Sun Apr 16 22:45:47 2023
read: IOPS=137k, BW=534MiB/s (560MB/s)(10.0GiB/19185msec)
clat (nsec): min=1000, max=24558k, avg=6732.85, stdev=140559.71
lat (nsec): min=1000, max=24558k, avg=6840.28, stdev=140559.60
clat percentiles (nsec):
| 1.00th=[ 1004], 5.00th=[ 1004], 10.00th=[ 1004],
| 20.00th=[ 2008], 30.00th=[ 2008], 40.00th=[ 2008],
| 50.00th=[ 2008], 60.00th=[ 2008], 70.00th=[ 2008],
| 80.00th=[ 2008], 90.00th=[ 2008], 95.00th=[ 2008],
| 99.00th=[ 4016], 99.50th=[ 5024], 99.90th=[3227648],
| 99.95th=[3457024], 99.99th=[4685824]
bw ( KiB/s): min=488126, max=583984, per=100.00%, avg=547183.82, stdev=29735.81, samples=38
iops : min=122031, max=145996, avg=136795.61, stdev=7433.90, samples=38
lat (usec) : 2=16.40%, 4=82.37%, 10=1.06%, 20=0.02%, 50=0.01%
lat (usec) : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
lat (msec) : 2=0.01%, 4=0.10%, 10=0.03%, 20=0.01%, 50=0.01%
cpu : usr=15.31%, sys=51.13%, ctx=87265, majf=0, minf=34
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=2621440,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
READ: bw=534MiB/s (560MB/s), 534MiB/s-534MiB/s (560MB/s-560MB/s), io=10.0GiB (10.7GB), run=19185-19185msec
$ fio --randrepeat=1 --name=test --filename=fiotest --bs=128k --iodepth=1 --readwrite=read --size=10G
test: (g=0): rw=read, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=psync, iodepth=1
fio-3.28
Starting 1 process
Jobs: 1 (f=1): [R(1)][100.0%][r=746MiB/s][r=5970 IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=53095: Sun Apr 16 22:47:16 2023
read: IOPS=5920, BW=740MiB/s (776MB/s)(10.0GiB/13837msec)
clat (usec): min=12, max=19573, avg=167.78, stdev=732.31
lat (usec): min=12, max=19573, avg=167.92, stdev=732.30
clat percentiles (usec):
| 1.00th=[ 15], 5.00th=[ 17], 10.00th=[ 18], 20.00th=[ 19],
| 30.00th=[ 19], 40.00th=[ 20], 50.00th=[ 21], 60.00th=[ 22],
| 70.00th=[ 22], 80.00th=[ 23], 90.00th=[ 26], 95.00th=[ 34],
| 99.00th=[ 3556], 99.50th=[ 3884], 99.90th=[ 5407], 99.95th=[ 6194],
| 99.99th=[13566]
bw ( KiB/s): min=670984, max=804864, per=100.00%, avg=757783.63, stdev=29219.46, samples=27
iops : min= 5242, max= 6288, avg=5919.85, stdev=228.22, samples=27
lat (usec) : 20=43.81%, 50=51.70%, 100=0.25%, 250=0.07%, 500=0.01%
lat (usec) : 750=0.01%, 1000=0.01%
lat (msec) : 2=0.01%, 4=3.75%, 10=0.41%, 20=0.02%
cpu : usr=1.08%, sys=55.90%, ctx=82171, majf=0, minf=64
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=81920,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
READ: bw=740MiB/s (776MB/s), 740MiB/s-740MiB/s (776MB/s-776MB/s), io=10.0GiB (10.7GB), run=13837-13837msec
$ fio --randrepeat=1 --name=test --filename=fiotest --bs=2m --iodepth=1 --readwrite=read --size=10G
test: (g=0): rw=read, bs=(R) 2048KiB-2048KiB, (W) 2048KiB-2048KiB, (T) 2048KiB-2048KiB, ioengine=psync, iodepth=1
fio-3.28
Starting 1 process
Jobs: 1 (f=1): [R(1)][100.0%][r=742MiB/s][r=371 IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=54427: Sun Apr 16 22:48:05 2023
read: IOPS=358, BW=716MiB/s (751MB/s)(10.0GiB/14296msec)
clat (usec): min=212, max=22137, avg=2786.25, stdev=2629.95
lat (usec): min=212, max=22137, avg=2786.62, stdev=2629.78
clat percentiles (usec):
| 1.00th=[ 229], 5.00th=[ 285], 10.00th=[ 302], 20.00th=[ 310],
| 30.00th=[ 326], 40.00th=[ 351], 50.00th=[ 4047], 60.00th=[ 4686],
| 70.00th=[ 4817], 80.00th=[ 4948], 90.00th=[ 5735], 95.00th=[ 6259],
| 99.00th=[ 7570], 99.50th=[ 8979], 99.90th=[19268], 99.95th=[20317],
| 99.99th=[22152]
bw ( KiB/s): min=643308, max=789884, per=100.00%, avg=733764.32, stdev=41416.68, samples=28
iops : min= 314, max= 385, avg=357.79, stdev=20.15, samples=28
lat (usec) : 250=2.81%, 500=46.31%, 750=0.68%, 1000=0.10%
lat (msec) : 2=0.06%, 4=0.02%, 10=49.63%, 20=0.33%, 50=0.06%
cpu : usr=0.27%, sys=56.58%, ctx=82517, majf=0, minf=544
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=5120,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
READ: bw=716MiB/s (751MB/s), 716MiB/s-716MiB/s (751MB/s-751MB/s), io=10.0GiB (10.7GB), run=14296-14296msec
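The individual runs above can also be scripted. A sketch, assuming fio is installed and the current directory is the SeaweedFS mount; the fio invocation is left commented out so the script is safe to dry-run:

```shell
#!/bin/sh
# Dry-run sketch: print the fio command for each block size used above.
for bs in 4k 128k 2m; do
  cmd="fio --randrepeat=1 --name=test --filename=fiotest --bs=$bs --iodepth=1 --readwrite=randread --size=10G --direct=1"
  echo "$cmd"
  # eval "$cmd"    # uncomment to actually run the benchmark
done
```

Remember to restart the server and mount between runs, as noted at the top of this page.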