Saving images with different sizes
Usually each image stores one file key in the database. However, one image can have several versions, e.g., thumbnail, small, medium, large, and original, and each version of the same image would need its own file key. Storing all of those keys is not ideal.
One way to resolve this is the following.
Reserve a set of file keys, for example, 5:
curl "http://<host>:<port>/dir/assign?count=5"
{"fid":"3,01637037d6","url":"127.0.0.1:8080","publicUrl":"localhost:8080","count":5}
Save the 5 versions of the image to the volume server. The URLs for each version can be:
http://<volume server url>/3,01637037d6
http://<volume server url>/3,01637037d6_1
http://<volume server url>/3,01637037d6_2
http://<volume server url>/3,01637037d6_3
http://<volume server url>/3,01637037d6_4
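For example, assuming the assignment above returned fid 3,01637037d6 and volume server url 127.0.0.1:8080, the five versions could be uploaded like this (the image file names are hypothetical):
# the first upload uses the fid as-is; the other versions append _1 .. _4
curl -F "file=@thumbnail.png" "http://127.0.0.1:8080/3,01637037d6"
curl -F "file=@small.png" "http://127.0.0.1:8080/3,01637037d6_1"
curl -F "file=@medium.png" "http://127.0.0.1:8080/3,01637037d6_2"
curl -F "file=@large.png" "http://127.0.0.1:8080/3,01637037d6_3"
curl -F "file=@original.png" "http://127.0.0.1:8080/3,01637037d6_4"
Only the base file key 3,01637037d6 then needs to be stored in the database; the suffix for each version can be derived at read time.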
Overwriting MIME types
The correct way to send the MIME type:
curl -F "file=@myImage.png;type=image/png" http://127.0.0.1:8081/5,2730a7f18b44
The wrong way to send it, as a request header (the header applies to the whole multipart request, not to the uploaded file):
curl -H "Content-Type:image/png" -F file=@myImage.png http://127.0.0.1:8080/5,2730a7f18b44
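To check which MIME type was actually stored, a HEAD request against the file should return it in the Content-Type response header:
curl -I "http://127.0.0.1:8081/5,2730a7f18b44"
# the response headers should include: Content-Type: image/png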
Securing SeaweedFS
The simple way is to front all master and volume servers with a firewall.
Note: the following white list option is deprecated. Please follow https://github.com/seaweedfs/seaweedfs/wiki/Security-Overview instead.
A white list option can be used. Only traffic from the white list IP addresses has write permission.
weed master -whiteList="::1,127.0.0.1"
weed volume -whiteList="::1,127.0.0.1"
# "::1" is for IP v6 localhost.
Data Migration Example
Start one master (replication 001 keeps two copies of each volume) and three volume servers:
weed master -mdir="/tmp/mdata" -defaultReplication="001" -ip="localhost" -port=9334
weed volume -dir=/tmp/vol1/ -mserver="localhost:9334" -ip="localhost" -port=8081
weed volume -dir=/tmp/vol2/ -mserver="localhost:9334" -ip="localhost" -port=8082
weed volume -dir=/tmp/vol3/ -mserver="localhost:9334" -ip="localhost" -port=8083
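To give the walkthrough some volumes to migrate, a few files can be uploaded through the master's /submit endpoint, which assigns a file key and stores the file in one call (file names are hypothetical):
curl -F "file=@test1.jpg" "http://localhost:9334/submit"
curl -F "file=@test2.jpg" "http://localhost:9334/submit"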
With some data written, each volume ends up with two copies spread across the three servers:
cd /tmp
ls vol1 vol2 vol3
vol1:
1.dat 1.idx 2.dat 2.idx 3.dat 3.idx 5.dat 5.idx
vol2:
2.dat 2.idx 3.dat 3.idx 4.dat 4.idx 6.dat 6.idx
vol3:
1.dat 1.idx 4.dat 4.idx 5.dat 5.idx 6.dat 6.idx
Stop the master and all three volume servers, then move the volume files from vol3 into vol1 and vol2.
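Concretely, for the layout above, each volume needs to keep two copies, so each file pair from vol3 goes to whichever of the other two servers is missing that volume (a sketch; adjust to your actual distribution):
# vol2 is missing volumes 1 and 5
mv vol3/1.dat vol3/1.idx vol2/
mv vol3/5.dat vol3/5.idx vol2/
# vol1 is missing volumes 4 and 6
mv vol3/4.dat vol3/4.idx vol1/
mv vol3/6.dat vol3/6.idx vol1/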
It is OK to move x.dat and x.idx from one volume server to another, because replicated copies of the same volume are exactly the same. This can be verified with md5:
md5 vol1/1.dat vol2/1.dat
MD5 (vol1/1.dat) = c1a49a0ee550b44fef9f8ae9e55215c7
MD5 (vol2/1.dat) = c1a49a0ee550b44fef9f8ae9e55215c7
md5 vol1/1.idx vol2/1.idx
MD5 (vol1/1.idx) = b9edc95795dfb3b0f9063c9cc9ba8095
MD5 (vol2/1.idx) = b9edc95795dfb3b0f9063c9cc9ba8095
After the move, vol1 and vol2 each hold a copy of every volume and vol3 is empty:
ls vol1 vol2 vol3
vol1:
1.dat 1.idx 2.dat 2.idx 3.dat 3.idx 4.dat 4.idx 5.dat 5.idx 6.dat 6.idx
vol2:
1.dat 1.idx 2.dat 2.idx 3.dat 3.idx 4.dat 4.idx 5.dat 5.idx 6.dat 6.idx
vol3:
Start the master and the two remaining volume servers:
weed master -mdir="/tmp/mdata" -defaultReplication="001" -ip="localhost" -port=9334
weed volume -dir=/tmp/vol1/ -mserver="localhost:9334" -ip="localhost" -port=8081
weed volume -dir=/tmp/vol2/ -mserver="localhost:9334" -ip="localhost" -port=8082
This completes moving the data from localhost:8083 to localhost:8081 and localhost:8082.
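As a final sanity check, the master's /dir/status endpoint reports the topology; every volume should now be listed on localhost:8081 and localhost:8082:
curl "http://localhost:9334/dir/status"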