weed shell
starts an interactive console for cluster maintenance operations.
$ weed shell
> help
Type: "help <command>" for help on <command>. Most commands support "<command> -h" also for options.
cluster.check # check current cluster network connectivity
cluster.ps # check current cluster process status
cluster.raft.add # add a server to the raft cluster
cluster.raft.ps # check current raft cluster status
cluster.raft.remove # remove a server from the raft cluster
collection.delete # delete specified collection
collection.list # list all collections
ec.balance # balance all ec shards among all racks and volume servers
ec.decode # decode an erasure-coded volume into a normal volume
ec.encode # apply erasure coding to a volume
ec.rebuild # find and rebuild missing ec shards among volume servers
fs.cat # stream the file content on to the screen
fs.cd # change directory to a directory /path/to/dir
fs.configure # configure and apply storage options for each location
fs.du # show disk usage
fs.ls # list all files under a directory
fs.meta.cat # print out the meta data content for a file or directory
fs.meta.changeVolumeId # change volume id in existing metadata.
fs.meta.load # load saved filer meta data to restore the directory and file structure
fs.meta.notify # recursively send directory and file meta data to notification message queue
fs.meta.save # save all directory and file meta data to a local file for metadata backup.
fs.mkdir # create a directory
fs.mv # move or rename a file or a folder
fs.pwd # print out current directory
fs.rm # remove file and directory entries
fs.tree # recursively list all files under a directory
fs.verify # recursively verify all files under a directory
lock # lock in order to exclusively manage the cluster
mount.configure # configure the mount on current server
mq.topic.list # print out all topics
remote.cache # cache the file content for mounted directories or files
remote.configure # remote storage configuration
remote.meta.sync # synchronize the local file meta data with the remote file metadata
remote.mount # mount remote storage and pull its metadata
remote.mount.buckets # mount all buckets in remote storage and pull its metadata
remote.uncache # keep the metadata but remove the cached file content for mounted directories or files
remote.unmount # unmount remote storage
s3.bucket.create # create a bucket with a given name
s3.bucket.delete # delete a bucket by a given name
s3.bucket.list # list all buckets
s3.bucket.quota # set/remove/enable/disable quota for a bucket
s3.bucket.quota.enforce # check quota for all buckets, make the bucket read only if over the limit
s3.circuitBreaker # configure and apply s3 circuit breaker options for each bucket
s3.clean.uploads # clean up stale multipart uploads
s3.configure # configure and apply s3 options for each bucket
unlock # unlock the cluster-wide lock
volume.balance # balance all volumes among volume servers
volume.check.disk # check all replicated volumes to find and fix inconsistencies. It is optional and resource intensive.
volume.configure.replication # change volume replication value
volume.copy # copy a volume from one volume server to another volume server
volume.delete # delete a live volume from one volume server
volume.deleteEmpty # delete empty volumes from all volume servers
volume.fix.replication # add or remove replicas to volumes that are missing replicas or over-replicated
volume.fsck # check all volumes to find entries not used by the filer
volume.list # list all volumes
volume.mark # Mark volume writable or readonly from one volume server
volume.mount # mount a volume from one volume server
volume.move # move a live volume from one volume server to another volume server
volume.tier.download # download the dat file of a volume from a remote tier
volume.tier.move # change a volume from one disk type to another
volume.tier.upload # upload the dat file of a volume to a remote tier
volume.unmount # unmount a volume from one volume server
volume.vacuum # compact volumes if deleted entries are more than the limit
volume.vacuum.disable # disable vacuum requests from the Master; volume.vacuum still works
volume.vacuum.enable # enable vacuuming request from Master
volumeServer.evacuate # move out all data on a volume server
volumeServer.leave # stop a volume server from sending heartbeats to the master
For example:
$ weed shell
> fs.du /objects
block:2715 byte: 31895432 /objects
For most volume operations, you need to prevent other concurrent operations. To do so, take the cluster-wide lock first:
> lock
> volume.fix.replication
> volume.mount ...
> ...
> unlock
Another example: sometimes one of your volume servers may go down and a new volume server is added. Here is the command you can run to fix volumes that are under-replicated:
# dry run: report any volumes that are under-replicated and for which there are servers meeting the replica placement requirement
$ echo "lock; volume.fix.replication -n ; unlock" | weed shell
replicating volume 241 001 from localhost:8080 to dataNode 127.0.0.1:7823 ...
# found one, let's really do it
$ echo "lock; volume.fix.replication ; unlock" | weed shell
replicating volume 241 001 from localhost:8080 to dataNode 127.0.0.1:7823 ...
# all volumes are replicated now
$ echo "lock; volume.fix.replication -n ; unlock" | weed shell
no under replicated volumes
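To automate this check, a small wrapper script can run the dry run first and apply the fix only when it reports work to do. A minimal sketch, assuming weed shell is on the PATH and the master listens on localhost:9333:
#!/bin/sh
# Dry run first: -n only reports what would be replicated, without copying data.
result=$(echo "lock; volume.fix.replication -n; unlock" | weed shell -master=localhost:9333)
if echo "$result" | grep -q "replicating volume"; then
  # The dry run found under-replicated volumes; apply the fix for real.
  echo "lock; volume.fix.replication; unlock" | weed shell -master=localhost:9333
fi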
Check and Fix Chunk Replication
If you see in the logs that a file chunk such as 2480,09a6290e6159aedd is not found:
I0212 22:22:56.666094 filechunk_manifest.go:197 read http://fast-volume-1:8080/2480,09a6290e6159aedd failed, err: http://fast-volume-1:8080/2480,09a6290e6159aedd?readDeleted=true: 404 Not Found
volume.check.disk
> lock; volume.check.disk -v -force -slow -volumeId 2480 -nonRepairThreshold 1 -syncDeleted
load collection logs-data volume 2480 index size 2621232 from fast-volume-3:8080 ...
load collection logs-data volume 2480 index size 2621232 from fast-volume-1:8080 ...
volume 2480 fast-volume-1:8080 has 163827 entries, fast-volume-3:8080 missed 0 and partially deleted 0 entries
volume 2480 fast-volume-3:8080 has 163827 entries, fast-volume-1:8080 missed 0 and partially deleted 0 entries
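The same check can also run non-interactively from a maintenance script. A sketch, assuming the master listens on localhost:9333; without -volumeId it checks all replicated volumes:
echo "lock; volume.check.disk -slow -syncDeleted; unlock" | weed shell -master=localhost:9333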
volume.fsck
Search for files that exist in the filer but have no chunks on the volume servers:
> lock; volume.fsck -findMissingChunksInFiler -verifyNeedles -collection logs-data -volumeId 2480 -v
checking directory /buckets/logs-data/2023-02-13
total 128 directories, 5148091 files
find missing file chunks in dataNodeId fast-volume-1:8080 volume 2480 ...
/buckets/logs-data/2022-10-10/39228128_2022-10-10.log
find missing file chunks in dataNodeId fast-volume-3:8080 volume 2480 ...
/buckets/logs-data/2022-10-10/39228128_2022-10-10.log
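This scan can also be scripted non-interactively. A sketch, reusing the collection, volume id, and flags from the example above and assuming the master listens on localhost:9333:
echo "lock; volume.fsck -findMissingChunksInFiler -verifyNeedles -collection logs-data -volumeId 2480 -v; unlock" | weed shell -master=localhost:9333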
Find the chunk id 2480,09a6290e6159aedd in the file's metadata:
> fs.meta.cat /buckets/logs-data/2022-10-10/39228128_2022-10-10.log
{
"name": "39228128_2022-10-10.log",
"isDirectory": false,
"chunks": [
{
"fileId": "2480,09a6290e6159aedd",
"offset": "0",
"size": "3884",
"modifiedTsNs": "1665409412570272776",
"eTag": "O5jilriNGhfJRBCCs+yU4g==",
"sourceFileId": "",
"fid": {
"volumeId": 2480,
"fileKey": "161884430",
"cookie": 1633267421
},
"sourceFid": null,
"cipherKey": "",
"isCompressed": false,
"isChunkManifest": false
}
],
"attributes": {
"fileSize": "3884",
"mtime": "1665409412",
"fileMode": 504,
"uid": 0,
"gid": 0,
"crtime": "1665409412",
"mime": "",
"ttlSec": 0,
"userName": "",
"groupName": [],
"symlinkTarget": "",
"md5": "",
"rdev": 0,
"inode": "0"
},
"extended": {},
"hardLinkId": "",
"hardLinkCounter": 0,
"content": "",
"remoteEntry": null,
"quota": "0"
}
chunks 1 meta size: 124 gzip:152
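To confirm whether this chunk is actually readable on a volume server, you can fetch it over HTTP directly, using the same URL format as the failed read in the log above (the volume server host here comes from that log line and is an assumption for your setup):
# HEAD request only; a 404 response confirms the chunk is missing on this replica.
curl -I "http://fast-volume-1:8080/2480,09a6290e6159aedd"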
fs.verify
Check the status of all chunks uploaded in the last hour:
> fs.verify -v -modifyTimeAgo 1h
...
total 807944 directories, 121461080 files
verified 53218 files, error 0 files
If fs.verify reports errors, you need a local incremental backup, created via asynchronous replication, to restore the affected files.
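To catch such problems early, the verification can run on a schedule. A minimal sketch as a crontab entry, assuming weed shell is on the PATH and the master listens on localhost:9333:
# Every hour, verify the chunks written in the last hour and append the result to a log.
0 * * * * echo "fs.verify -v -modifyTimeAgo 1h" | weed shell -master=localhost:9333 >> /var/log/weed-verify.log 2>&1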
One more trick
You can skip the "fs." prefix for all "fs.*" commands:
> fs.ls
dd.dat
topics
> ls
dd.dat
topics
> ls -al topics
drwxr-xr-x 0 chrislu staff 0 /topics/.system
total 1
> fs.du
block: 515 byte:10039099653 /
> du
block: 515 byte:10039099653 /
Run from Docker Image
weed shell commands can also be run via the Docker image, allowing an operator to perform maintenance tasks.
docker run \
--rm \
-e SHELL_FILER=localhost:8888 \
-e SHELL_MASTER=localhost:9333 \
chrislusf/seaweedfs:local \
"shell" \
"fs.configure -locationPrefix=/buckets/foo -volumeGrowthCount=3 -replication=002 -apply"
Here shell selects the Docker image entrypoint, and the argument fs.configure -locationPrefix=/buckets/foo -volumeGrowthCount=3 -replication=002 -apply is the command that weed shell executes.
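Semicolon-separated commands work here as well, so the locked maintenance sequences shown earlier can run the same way. A sketch, assuming the entrypoint forwards the quoted command string to weed shell unchanged:
docker run \
  --rm \
  -e SHELL_MASTER=localhost:9333 \
  chrislusf/seaweedfs:local \
  "shell" \
  "lock; volume.fix.replication -n; unlock"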