FAQ
Cannot upload due to "no free volumes left"
The symptom is similar to https://github.com/seaweedfs/seaweedfs/issues/1631 where the logs show:
Nov 20 18:49:37 s2375.j weed[31818]: E1120 18:49:37 31818 filer_server_handlers_write.go:42] failing to assign a file id: rpc error: code = Unknown desc = No free volumes left!
Nov 20 18:49:37 s2375.j weed[31818]: I1120 18:49:37 31818 common.go:53] response method:PUT URL:/buckets/dev-passport-video-recordings/02342a46-7435-b698-2437-c778db34ef59.mp4 with httpStatus:500 and JSON:{"error":"rpc error: code = Unknown desc = No free volumes left!"}
Nov 20 18:49:37 s2375.j weed[31818]: E1120 18:49:37 31818 s3api_object_handlers.go:336] upload to filer error: rpc error: code = Unknown desc = No free volumes left!
Each bucket will create one collection, with at least 7 volumes by default, and each volume is pre-allocated with a large size, usually 30GB.
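To see how close a bucket's collection is to this limit, you can inspect the cluster in weed shell; collection.list and volume.list are existing shell commands, though their output format varies by version:
> collection.list
> volume.list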
There are 2 ways to fix this.
- Reduce the global volume size by adjusting the -volumeSizeLimitMB CLI option in weed master.
- Reduce the number of volumes to grow when a collection runs out of volumes. You can configure the per-bucket storage this way in weed shell:
> fs.configure -locationPrefix=/buckets/ -volumeGrowthCount=1 -apply
This will add 1 physical volume when the existing volumes are full. If you use replication, you will need to grow more volumes at a time, so that the growth count is a multiple of the number of replicas:
fs.configure -locationPrefix=/buckets/ -replication=001 -volumeGrowthCount=2 -apply
See https://github.com/seaweedfs/seaweedfs/wiki/Path-Specific-Configuration
Or you can change the volume growth count in the master config file (weed scaffold -config=master); you have to change the copy_x values.
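As a rough sketch, the relevant section of the generated master.toml looks like the snippet below; the exact default values may differ between SeaweedFS versions, so check your own scaffold output:
[master.volume_growth]
copy_1 = 7                # create 1 x 7 = 7 actual volumes
copy_2 = 6                # create 2 x 6 = 12 actual volumes
copy_3 = 3                # create 3 x 3 = 9 actual volumes
copy_other = 1            # create n x 1 = n actual volumes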
How to speed up bucket deletion?
One common unexpected problem is the deletion can be slow. To delete a file, we need to delete the file content on the volume servers and delete the file entry from the filer store. It is almost the same amount of work as adding a file. If there are millions of files, it can take a long time to delete.
When you need to create large buckets and delete them often, you may choose leveldb3 as the filer store, or any other store that supports Fast Bucket Deletion in https://github.com/seaweedfs/seaweedfs/wiki/Filer-Stores.
leveldb3 automatically creates a separate LevelDB instance for each bucket, so bucket deletion is as simple as deleting that bucket's LevelDB instance files and its collection of volume files.
Having a separate LevelDB instance, or separate SQL tables, also helps isolate the storage and improves performance.
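For reference, leveldb3 is enabled in filer.toml (generated with weed scaffold -config=filer); a minimal sketch, assuming the default data directory:
[leveldb3]
# each bucket gets its own LevelDB instance under this directory
enabled = true
dir = "./filerldb3"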
Filer Metadata Store Growth
Due to the semantics of the S3 API, empty directories (aka prefixes) aren't shown. However, an entry is still stored in the filer metadata store. When workload access patterns create many unique directories and then remove all the objects inside those directories, the filer metadata store can grow unbounded with orphaned directories. These directories are visible in the filer metadata store itself, but not using the S3 API.
If the filer argument -allowEmptyFolder=false is set, the orphaned directories are cleaned up during list requests for non-bucket-level directories. Normally this works well, but if the workload never performs a list operation, the orphaned directories may never be cleaned up. To force cleanup, simply list an existing, non-bucket-level directory.
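For example, a filer with its built-in S3 gateway can be started with this flag as sketched below; any other options and addresses are whatever your own setup already uses:
weed filer -s3 -allowEmptyFolder=false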
Example using rclone:
rclone lsf seaweedfs:my-bucket/dir
If the directory dir exists in my-bucket, the orphaned metadata will be cleaned up. Note that due to slight API usage differences, rclone ls does not trigger cleanup, but rclone lsf will.