Filer Server API
zemul edited this page 2023-05-04 16:41:00 +08:00
You can append `&pretty=y` to any HTTP API request to get formatted JSON output.
Filer server
POST/PUT/GET files
# Basic Usage:
//create or overwrite the file, the directories /path/to will be automatically created
POST /path/to/file
PUT /path/to/file
//create or overwrite the file, the filename in the multipart request will be used
POST /path/to/
//create the file, or append to it if it already exists
POST /path/to/file?op=append
PUT /path/to/file?op=append
//get the file content
GET /path/to/file
//return a json format subdirectory and files listing
GET /path/to/
Accept: application/json
# options for POST a file:
// set file TTL
POST /path/to/file?ttl=1d
// set file mode when creating or overwriting a file
POST /path/to/file?mode=0755
POST/PUT Parameter | Description | Default |
---|---|---|
dataCenter | data center | empty |
rack | rack | empty |
dataNode | data node | empty |
collection | collection | empty |
replication | replication | empty |
fsync | if "true", the file content write will incur an fsync operation (though the file metadata will still be written separately) | false |
saveInside | if "true", the file content will be written into the metadata entry itself | false |
ttl | time to live; examples: 3m: 3 minutes, 4h: 4 hours, 5d: 5 days, 6w: 6 weeks, 7M: 7 months, 8y: 8 years | empty |
maxMB | max chunk size in MB | empty |
mode | file mode | 0660 |
op | file operation; currently only "append" is supported | empty |
skipCheckParentDir | ensuring the parent directory exists costs one metadata API call; skipping this check can reduce latency | false |
header: Content-Type | used for auto compression | empty |
header: Content-Disposition | used as response content-disposition | empty |
prefixed header: Seaweed- | example: Seaweed-name1: value1. Returned as Seaweed-Name1: value1 in GET/HEAD response headers. | empty |
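The `ttl` values can be sanity-checked client-side before upload. A minimal sketch, assuming 30-day months and 365-day years (the helper `ttl_to_seconds` is hypothetical, not part of any SeaweedFS client):

```python
# Hypothetical helper that converts a filer TTL string -- 3m, 4h, 5d, 6w,
# 7M, 8y -- into seconds. Unit letters follow the table above:
# m=minute, h=hour, d=day, w=week, M=month, y=year.
_TTL_UNITS = {
    "m": 60,
    "h": 3600,
    "d": 86400,
    "w": 7 * 86400,
    "M": 30 * 86400,   # assuming a 30-day month
    "y": 365 * 86400,  # assuming a 365-day year
}

def ttl_to_seconds(ttl: str) -> int:
    """Parse a TTL string such as '3m' or '5d' into seconds."""
    if len(ttl) < 2 or ttl[-1] not in _TTL_UNITS:
        raise ValueError(f"unsupported TTL string: {ttl!r}")
    return int(ttl[:-1]) * _TTL_UNITS[ttl[-1]]

print(ttl_to_seconds("3m"))  # 180
print(ttl_to_seconds("1d"))  # 86400
```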
GET Parameter | Description | Default |
---|---|---|
metadata | get file/directory metadata | false |
resolveManifest | resolve manifest chunks | false |
Notice:
- It is recommended to add retries when writing to the filer.
- AutoChunking is not supported for the PUT method. If the file is larger than 256MB, only the leading 256MB of the PUT request will be saved.
- When appending to a file, each append creates one chunk that is added to the file metadata. Too many small appends lead to too many chunks, so try to keep each append reasonably large.
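Following the advice above, a client can coalesce small writes and only issue `?op=append` once a size threshold is reached. A sketch under stated assumptions: `send_append` is a hypothetical callback that would POST the buffered bytes to `/path/to/file?op=append`:

```python
class AppendBuffer:
    """Coalesce small writes into larger appends to limit chunk count.

    `send_append` is a hypothetical callback, e.g. one that POSTs the
    buffered bytes to /path/to/file?op=append.
    """

    def __init__(self, send_append, threshold=4 * 1024 * 1024):
        self.send_append = send_append
        self.threshold = threshold
        self.buf = bytearray()

    def write(self, data: bytes):
        # Accumulate until the threshold is crossed, then flush once.
        self.buf.extend(data)
        if len(self.buf) >= self.threshold:
            self.flush()

    def flush(self):
        if self.buf:
            self.send_append(bytes(self.buf))
            self.buf.clear()

# Usage: ten 1 KB writes collapse into just two append requests.
sent = []
buf = AppendBuffer(sent.append, threshold=8 * 1024)
for _ in range(10):
    buf.write(b"x" * 1024)
buf.flush()
print(len(sent))  # 2 append requests for 10 writes
```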
Examples:
# Basic Usage:
> curl -F file=@report.js "http://localhost:8888/javascript/"
{"name":"report.js","size":866}
> curl "http://localhost:8888/javascript/report.js" # get the file content
> curl -I "http://localhost:8888/javascript/report.js" # get only header
...
> curl -F file=@report.js "http://localhost:8888/javascript/new_name.js" # upload the file to a different name
{"name":"report.js","size":5514}
> curl -T test.yaml http://localhost:8888/test.yaml # upload file by PUT
{"name":"test.yaml","size":866}
> curl -F file=@report.js "http://localhost:8888/javascript/new_name.js?op=append" # append to a file
{"name":"report.js","size":5514}
> curl -T test.yaml "http://localhost:8888/test.yaml?op=append" # append to a file by PUT
{"name":"test.yaml","size":866}
> curl -H "Accept: application/json" "http://localhost:8888/javascript/?pretty=y" # list all files under /javascript/
{
"Path": "/javascript",
"Entries": [
{
"FullPath": "/javascript/jquery-2.1.3.min.js",
"Mtime": "2020-04-19T16:08:14-07:00",
"Crtime": "2020-04-19T16:08:14-07:00",
"Mode": 420,
"Uid": 502,
"Gid": 20,
"Mime": "text/plain; charset=utf-8",
"Replication": "000",
"Collection": "",
"TtlSec": 0,
"UserName": "",
"GroupNames": null,
"SymlinkTarget": "",
"Md5": null,
"Extended": null,
"chunks": [
{
"file_id": "2,087f23051201",
"size": 84320,
"mtime": 1587337694775717000,
"e_tag": "32015dd42e9582a80a84736f5d9a44d7",
"fid": {
"volume_id": 2,
"file_key": 2175,
"cookie": 587534849
},
"is_gzipped": true
}
]
},
{
"FullPath": "/javascript/jquery-sparklines",
"Mtime": "2020-04-19T16:08:14-07:00",
"Crtime": "2020-04-19T16:08:14-07:00",
"Mode": 2147484152,
"Uid": 502,
"Gid": 20,
"Mime": "",
"Replication": "000",
"Collection": "",
"TtlSec": 0,
"UserName": "",
"GroupNames": null,
"SymlinkTarget": "",
"Md5": null,
"Extended": null
}
],
"Limit": 100,
"LastFileName": "jquery-sparklines",
"ShouldDisplayLoadMore": false
}
# get directory metadata
> curl 'http://localhost:8888/javascript/?metadata=true&pretty=yes'
{
"FullPath": "/javascript",
"Mtime": "2022-03-17T11:34:51+08:00",
"Crtime": "2022-03-17T11:34:51+08:00",
"Mode": 2147484141,
"Uid": 1001,
"Gid": 1001,
"Mime": "",
"TtlSec": 0,
"UserName": "",
"GroupNames": null,
"SymlinkTarget": "",
"Md5": null,
"FileSize": 0,
"Rdev": 0,
"Inode": 0,
"Extended": null,
"HardLinkId": null,
"HardLinkCounter": 0,
"Content": null,
"Remote": null,
"Quota": 0
}
# get file metadata
> curl 'http://localhost:8888/test01.py?metadata=true&pretty=yes'
{
"FullPath": "/test01.py",
"Mtime": "2022-01-09T19:11:18+08:00",
"Crtime": "2022-01-09T19:11:18+08:00",
"Mode": 432,
"Uid": 1001,
"Gid": 1001,
"Mime": "text/x-python",
"Replication": "",
"Collection": "",
"TtlSec": 0,
"DiskType": "",
"UserName": "",
"GroupNames": null,
"SymlinkTarget": "",
"Md5": "px6as5eP7tF5YcgAv5m60Q==",
"FileSize": 1992,
"Extended": null,
"chunks": [
{
"file_id": "17,04fbb55507b515",
"size": 1992,
"mtime": 1641726678984876713,
"e_tag": "px6as5eP7tF5YcgAv5m60Q==",
"fid": {
"volume_id": 17,
"file_key": 326581,
"cookie": 1426568469
},
"is_compressed": true
}
],
"HardLinkId": null,
"HardLinkCounter": 0,
"Content": null,
"Remote": null,
"Quota": 0
}
GET files
//get file with a different content-disposition
GET /path/to/file?response-content-disposition=attachment%3B%20filename%3Dtesting.txt
GET Parameter | Description | Default |
---|---|---|
response-content-disposition | used as response content-disposition | empty |
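The parameter value must be URL-encoded. For example, with Python's standard library, `attachment; filename=testing.txt` encodes to exactly the query value shown above:

```python
from urllib.parse import quote

# URL-encode the desired Content-Disposition value for use as a query parameter.
disposition = quote("attachment; filename=testing.txt", safe="")
url = f"http://localhost:8888/path/to/file?response-content-disposition={disposition}"
print(disposition)  # attachment%3B%20filename%3Dtesting.txt
```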
PUT/DELETE file tagging
# put 2 pairs of meta data
curl -X PUT -H "Seaweed-Name1: value1" -H "Seaweed-some: some string value" "http://localhost:8888/path/to/a/file?tagging"
# read the meta data from HEAD request
curl -I "http://localhost:8888/path/to/a/file"
...
Seaweed-Name1: value1
Seaweed-Some: some string value
...
# delete all "Seaweed-" prefixed meta data
curl -X DELETE "http://localhost:8888/path/to/a/file?tagging"
# delete specific "Seaweed-" prefixed meta data
curl -X DELETE "http://localhost:8888/path/to/a/file?tagging=Name1,Some"
Method | Request | Header | Operation |
---|---|---|---|
PUT | <file_url>?tagging | prefixed with "Seaweed-" | set the meta data |
DELETE | <file_url>?tagging | | remove all "Seaweed-" prefixed headers |
DELETE | <file_url>?tagging=Some,Name | | remove the headers "Seaweed-Some" and "Seaweed-Name" |
Notice that tag names follow the HTTP header key convention, with the first character of each word capitalized.
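A client can apply that capitalization convention up front so the headers it sends match what GET/HEAD will return. A minimal sketch (the helper name `seaweed_tag_headers` is hypothetical):

```python
def seaweed_tag_headers(tags: dict) -> dict:
    """Turn {'name1': 'value1'} into {'Seaweed-Name1': 'value1'},
    matching the capitalization the filer returns on GET/HEAD.
    Hypothetical helper, not part of any SeaweedFS client."""
    return {
        "Seaweed-" + "-".join(part.capitalize() for part in name.split("-")): value
        for name, value in tags.items()
    }

print(seaweed_tag_headers({"name1": "value1", "some": "some string value"}))
# {'Seaweed-Name1': 'value1', 'Seaweed-Some': 'some string value'}
```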
Move files and directories
# move(rename) "/path/to/src_file" to "/path/to/dst_file"
> curl -X POST 'http://localhost:8888/path/to/dst_file?mv.from=/path/to/src_file'
POST Parameter | Description | Default |
---|---|---|
mv.from | move from one file or directory to another location | Required field |
Create an empty folder
Folders are usually created automatically when uploading a file. To create an empty folder, you can use this:
curl -X POST "http://localhost:8888/test/"
List files under a directory
Some folders can be very large. To list files efficiently, the filer uses a non-traditional pagination scheme: each request supplies a "lastFileName" and a "limit=x". The filer locates "lastFileName" in O(log n) time and retrieves the next x files.
curl -H "Accept: application/json" "http://localhost:8888/javascript/?pretty=y&lastFileName=jquery-2.1.3.min.js&limit=2"
{
"Path": "/javascript",
"Entries": [
{
"FullPath": "/javascript/jquery-sparklines",
"Mtime": "2020-04-19T16:08:14-07:00",
"Crtime": "2020-04-19T16:08:14-07:00",
"Mode": 2147484152,
"Uid": 502,
"Gid": 20,
"Mime": "",
"Replication": "000",
"Collection": "",
"TtlSec": 0,
"UserName": "",
"GroupNames": null,
"SymlinkTarget": "",
"Md5": null,
"Extended": null
}
],
"Limit": 2,
"LastFileName": "jquery-sparklines",
"ShouldDisplayLoadMore": false
}
Parameter | Description | Default |
---|---|---|
limit | how many files to show | 100 |
lastFileName | the last file name in the previous batch | empty |
namePattern | match file names; case-sensitive wildcard characters '*' and '?' | empty |
namePatternExclude | negative match on file names; case-sensitive wildcard characters '*' and '?' | empty |
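The pagination above can be driven by a simple loop: pass the previous batch's `LastFileName` until no more entries come back. A sketch where `fetch_page` is a hypothetical stand-in for the HTTP GET (here faked with an in-memory listing):

```python
def iter_directory(fetch_page, limit=100):
    """Yield every entry of a directory listing, page by page.

    `fetch_page(last_file_name, limit)` is a hypothetical callback that
    performs GET /dir/?lastFileName=...&limit=... and returns the parsed
    JSON dict with "Entries" and "LastFileName" keys.
    """
    last = ""
    while True:
        page = fetch_page(last, limit)
        entries = page.get("Entries") or []
        if not entries:
            return
        yield from entries
        last = page["LastFileName"]

# Usage with a fake in-memory "directory" of 5 files, 2 per page:
names = ["a.js", "b.js", "c.js", "d.js", "e.js"]

def fake_fetch(last, limit):
    start = names.index(last) + 1 if last else 0
    batch = names[start:start + limit]
    return {"Entries": [{"FullPath": n} for n in batch],
            "LastFileName": batch[-1] if batch else last}

result = [e["FullPath"] for e in iter_directory(fake_fetch, limit=2)]
print(result)  # ['a.js', 'b.js', 'c.js', 'd.js', 'e.js']
```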
Supported Name Patterns
The patterns are case-sensitive and support wildcard characters '*' and '?'.
Pattern | Matches |
---|---|
* | any file name |
*.jpg | abc.jpg |
a*.jp*g | abc.jpg, abc.jpeg |
a*.jp?g | abc.jpeg |
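These patterns behave like case-sensitive shell globs restricted to `*` and `?`. Python's `fnmatch.fnmatchcase` reproduces the table above (though `fnmatchcase` additionally supports `[seq]` bracket patterns, which may not apply here):

```python
from fnmatch import fnmatchcase

# Case-sensitive glob matching with '*' (any run of characters)
# and '?' (exactly one character), mirroring the table above.
for pattern, name in [("*", "anything.txt"),
                      ("*.jpg", "abc.jpg"),
                      ("a*.jp*g", "abc.jpeg"),
                      ("a*.jp?g", "abc.jpeg"),
                      ("a*.jp?g", "abc.jpg")]:
    print(pattern, name, fnmatchcase(name, pattern))
```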
Deletion
Delete a file
> curl -X DELETE http://localhost:8888/path/to/file
Delete a folder
// recursively delete all files and folders under a path
> curl -X DELETE "http://localhost:8888/path/to/dir?recursive=true"
// recursively delete everything, ignoring any recursive error
> curl -X DELETE "http://localhost:8888/path/to/dir?recursive=true&ignoreRecursiveError=true"
// For Experts Only: remove filer directories only, without removing data chunks.
// see https://github.com/seaweedfs/seaweedfs/pull/1153
> curl -X DELETE "http://localhost:8888/path/to?recursive=true&skipChunkDeletion=true"
Parameter | Description | Default |
---|---|---|
recursive | if "recursive=true", recursively delete all files and folders | filer recursive_delete option from filer.toml |
ignoreRecursiveError | if "ignoreRecursiveError=true", ignore errors in recursive mode | false |
skipChunkDeletion | if "skipChunkDeletion=true", do not delete file chunks on volume servers | false |
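The flags combine as ordinary query parameters. A minimal sketch that builds such a URL with Python's standard library (the helper `delete_url` is hypothetical):

```python
from urllib.parse import urlencode

def delete_url(base, path, recursive=False,
               ignore_recursive_error=False, skip_chunk_deletion=False):
    """Build a filer DELETE URL from the flags in the table above.

    Hypothetical helper, not part of any SeaweedFS client.
    """
    params = {}
    if recursive:
        params["recursive"] = "true"
    if ignore_recursive_error:
        params["ignoreRecursiveError"] = "true"
    if skip_chunk_deletion:
        params["skipChunkDeletion"] = "true"
    query = urlencode(params)
    return f"{base}{path}" + (f"?{query}" if query else "")

print(delete_url("http://localhost:8888", "/path/to/dir",
                 recursive=True, ignore_recursive_error=True))
# http://localhost:8888/path/to/dir?recursive=true&ignoreRecursiveError=true
```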