Updated Words from SeaweedFS Users (markdown)

Chris Lu 2021-03-21 12:15:03 -07:00
parent 5379149b79
commit 0864b09c49

@@ -1,6 +1,6 @@
| Use cases | Details | Comments |
| ---- | -- | -- |
| OStoreBench: open-source benchmarking of distributed object storage systems using real-world application scenarios; benchmarks SeaweedFS against Ceph and Swift | [Source Code](https://github.com/EVERYGO111/OStoreBench), [Published Paper](https://github.com/EVERYGO111/OStoreBench/blob/master/research%20paper-OStoreBench.pdf) | OStoreBench: The performance of SeaweedFS is the best in three typical scenarios compared to Ceph and Swift. |
| Replaced Ceph with SeaweedFS as the backend for a Docker registry in production | About half a million files under the registry. Not big, but with intensive exchange. | The killer feature of SeaweedFS is that it is designed like S3 (as in Yandex), can run in k8s, and can spread between data centers (see the sketch after this table). Ceph has a bad design for huge numbers of small files: with over 10 million files, cluster recovery takes several days. The next step is to use it to replace GlusterFS, which is now barely alive and buckling under 10 million files. |
| We use SeaweedFS embedded in our AI products that are deployed on client sites (usually air-gapped because of the sensitivity of the data) | Clusters ranging from 3-10 servers (and now starting to get bigger and bigger), usually retaining 7-14 days of video and 30-60 days of thumbnails | We compared Ceph and MinIO: we checked deployment procedure, maintenance, and especially write performance, single-server performance, and ease of scale-out, and found that SeaweedFS always won. We are mainly write-intensive and rarely read (usually reading right after writing, so no real disk access), and 95% of the data is not mission critical, so the simplicity of SeaweedFS and the amazing performance (writes are kept as sequential as possible) made it the choice. |
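
Both the Docker registry and AI-product entries above lean on SeaweedFS's S3-compatible API. A minimal sketch of that access path, assuming a local SeaweedFS S3 gateway on port 8333 (e.g. started with `weed server -s3` or `weed s3`) and placeholder credentials, bucket, and key names, using the AWS SDK for Go:

```go
package main

import (
	"bytes"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	// Assumed endpoint: a SeaweedFS S3 gateway listening on localhost:8333.
	// The access/secret keys are placeholders for whatever identities the
	// gateway is configured with.
	sess, err := session.NewSession(&aws.Config{
		Endpoint:         aws.String("http://localhost:8333"),
		Region:           aws.String("us-east-1"),
		Credentials:      credentials.NewStaticCredentials("ACCESS_KEY", "SECRET_KEY", ""),
		S3ForcePathStyle: aws.Bool(true), // path-style addressing for a local endpoint
	})
	if err != nil {
		log.Fatal(err)
	}

	svc := s3.New(sess)

	// Upload a small object -- the same kind of PUT a registry or
	// thumbnail workload would issue against the S3 API.
	_, err = svc.PutObject(&s3.PutObjectInput{
		Bucket: aws.String("registry"), // hypothetical bucket name
		Key:    aws.String("docker/registry/v2/blobs/example"),
		Body:   bytes.NewReader([]byte("hello seaweedfs")),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("object stored via the S3-compatible API")
}
```

Because the gateway speaks standard S3, the same setup works for off-the-shelf S3 clients (such as the Docker registry's S3 storage driver) by pointing them at the gateway endpoint instead of AWS.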