commit d892cad15d
README.md (32 lines changed)
@@ -81,17 +81,15 @@ SeaweedFS is a simple and highly scalable distributed file system. There are two
1. to store billions of files!
2. to serve the files fast!

-SeaweedFS started as an Object Store to handle small files efficiently. Instead of managing all file metadata in a central master, the central master only manages file volumes, and it lets these volume servers manage files and their metadata. This relieves concurrency pressure from the central master and spreads file metadata into volume servers, allowing faster file access (just one disk read operation).
+SeaweedFS started as an Object Store to handle small files efficiently. Instead of managing all file metadata in a central master, the central master only manages file volumes, and it lets these volume servers manage files and their metadata. This relieves concurrency pressure from the central master and spreads file metadata into volume servers, allowing faster file access (O(1), usually just one disk read operation).

+SeaweedFS can transparently integrate with the cloud. With hot data on local cluster, and warm data on the cloud with O(1) access time, SeaweedFS can achieve both fast local access time and elastic cloud storage capacity, without any client side changes.
+
There is only 40 bytes of disk storage overhead for each file's metadata. It is so simple with O(1) disk reads that you are welcome to challenge the performance with your actual use cases.

SeaweedFS started by implementing [Facebook's Haystack design paper](http://www.usenix.org/event/osdi10/tech/full_papers/Beaver.pdf). Also, SeaweedFS implements erasure coding with ideas from [f4: Facebook’s Warm BLOB Storage System](https://www.usenix.org/system/files/conference/osdi14/osdi14-paper-muralidhar.pdf)

-SeaweedFS can work very well with just the object store. [[Filer]] can then be added later to support directories and POSIX attributes. Filer is a separate linearly-scalable stateless server with customizable metadata stores, e.g., MySql/Postgres/Redis/Etcd/Cassandra/LevelDB.
+On top of the object store, optional [Filer] can support directories and POSIX attributes. Filer is a separate linearly-scalable stateless server with customizable metadata stores, e.g., MySql, Postgres, Redis, Etcd, Cassandra, LevelDB, MemSql, TiDB, TiKV, CockroachDB, etc.

-[Back to TOC](#table-of-contents)
-
-## Features ##
-
[Back to TOC](#table-of-contents)

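The read/write split described in this hunk is easy to exercise directly over HTTP: the master only hands out a file id and the address of the volume server that owns it, and the file bytes themselves go straight to that volume server. Below is a minimal Go sketch of the write path, assuming a default local cluster with the master on localhost:9333; the struct only mirrors the documented `/dir/assign` response fields and is not SeaweedFS source code.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"mime/multipart"
	"net/http"
)

// assignResult mirrors the fields of the master's /dir/assign response
// that matter here: a file id and the volume server that owns it.
type assignResult struct {
	Fid string `json:"fid"`
	URL string `json:"url"`
}

func main() {
	// Step 1: ask the master for a file id. No file bytes are involved;
	// the master only tracks volumes, not individual files.
	resp, err := http.Get("http://localhost:9333/dir/assign")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var a assignResult
	if err := json.NewDecoder(resp.Body).Decode(&a); err != nil {
		panic(err)
	}

	// Step 2: send the content straight to the assigned volume server.
	var buf bytes.Buffer
	mw := multipart.NewWriter(&buf)
	part, _ := mw.CreateFormFile("file", "hello.txt")
	part.Write([]byte("hello, seaweedfs"))
	mw.Close()

	up, err := http.Post("http://"+a.URL+"/"+a.Fid, mw.FormDataContentType(), &buf)
	if err != nil {
		panic(err)
	}
	up.Body.Close()

	fmt.Printf("stored at http://%s/%s\n", a.URL, a.Fid)
}
```

Reads follow the same shape: the volume id embedded in the fid (for example, in a fid like "3,01637037d6" the leading 3 is the volume id) is the only thing that ever needs resolving against the master, so fetching the content is a single request to the volume server.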
@@ -104,8 +102,10 @@ SeaweedFS can work very well with just the object store. [[Filer]] can then be a
* Adding/Removing servers does **not** cause any data re-balancing.
* Optionally fix the orientation for jpeg pictures.
* Support ETag, Accept-Range, Last-Modified, etc.
-* Support in-memory/leveldb/boltdb/btree mode tuning for memory/performance balance.
+* Support in-memory/leveldb/readonly mode tuning for memory/performance balance.
* Support rebalancing the writable and readonly volumes.
+* [Transparent cloud integration][CloudTier]: unlimited capacity via tiered cloud storage for warm data.
+* [Erasure Coding for warm storage][ErasureCoding] Rack-Aware 10.4 erasure coding reduces storage cost and increases availability.

[Back to TOC](#table-of-contents)

@@ -113,7 +113,6 @@ SeaweedFS can work very well with just the object store. [[Filer]] can then be a
* [filer server][Filer] provide "normal" directories and files via http.
* [mount filer][Mount] to read and write files directly as a local directory via FUSE.
* [Amazon S3 compatible API][AmazonS3API] to access files with S3 tooling.
-* [Erasure Coding for warm storage][ErasureCoding] Rack-Aware 10.4 erasure coding reduces storage cost and increases availability.
* [Hadoop Compatible File System][Hadoop] to access files from Hadoop/Spark/Flink/etc jobs.
* [Async Backup To Cloud][BackupToCloud] has extremely fast local access and backups to Amazon S3, Google Cloud Storage, Azure, BackBlaze.
* [WebDAV] access as a mapped drive on Mac and Windows, or from mobile devices.
@@ -125,6 +124,7 @@ SeaweedFS can work very well with just the object store. [[Filer]] can then be a
[Hadoop]: https://github.com/chrislusf/seaweedfs/wiki/Hadoop-Compatible-File-System
[WebDAV]: https://github.com/chrislusf/seaweedfs/wiki/WebDAV
[ErasureCoding]: https://github.com/chrislusf/seaweedfs/wiki/Erasure-coding-for-warm-storage
+[CloudTier]: https://github.com/chrislusf/seaweedfs/wiki/Cloud-Tier

[Back to TOC](#table-of-contents)

@@ -318,6 +318,16 @@ Each individual file size is limited to the volume size.

All file meta information stored on an volume server is readable from memory without disk access. Each file takes just a 16-byte map entry of <64bit key, 32bit offset, 32bit size>. Of course, each map entry has its own space cost for the map. But usually the disk space runs out before the memory does.

+### Tiered Storage to the cloud ###
+
+The local volume servers are much faster, while cloud storages have elastic capacity and are actually more cost-efficient if not accessed often (usually free to upload, but relatively costly to access). With the append-only structure and O(1) access time, SeaweedFS can take advantage of both local and cloud storage by offloading the warm data to the cloud.
+
+Usually hot data are fresh and warm data are old. SeaweedFS puts the newly created volumes on local servers, and optionally upload the older volumes on the cloud. If the older data are accessed less often, this literally gives you unlimited capacity with limited local servers, and still fast for new data.
+
+With the O(1) access time, the network latency cost is kept at minimum.
+
+If the hot~warm data is split as 20~80, with 20 servers, you can achieve storage capacity of 100 servers. That's a cost saving of 80%! Or you can repurpose the 80 servers to store new data also, and get 5X storage throughput.
+
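The two figures quoted in this hunk, 16 bytes of in-memory metadata per file and a 20~80 hot/warm split that stretches 20 servers to the capacity of 100, are simple to sanity-check. A rough Go sketch, assuming the <64bit key, 32bit offset, 32bit size> layout described above (the type and field names here are illustrative, not the actual SeaweedFS types):

```go
package main

import "fmt"

// needleEntry is an illustrative index entry matching the
// <64bit key, 32bit offset, 32bit size> layout described above.
type needleEntry struct {
	Key    uint64 // file key within its volume
	Offset uint32 // where the needle starts in the volume file
	Size   uint32 // length of the stored content
}

func main() {
	// 8 + 4 + 4 = 16 bytes per file, before the map's own overhead.
	const entryBytes = 8 + 4 + 4

	// Keeping one billion file entries resident costs about 16 GB of RAM,
	// which is why disk space usually runs out before memory does.
	const files = 1_000_000_000
	fmt.Printf("%d files -> ~%d GB of index entries\n", files, files*entryBytes/1_000_000_000)

	// The hot~warm 20~80 split: if 20 local servers hold the 20% of data
	// that stays hot and the other 80% is offloaded to the cloud, those
	// 20 servers front the capacity of 100.
	const hotServers, hotShare = 20, 0.20
	fmt.Printf("effective capacity: %.0f servers\n", hotServers/hotShare)
}
```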
[Back to TOC](#table-of-contents)

## Compared to Other File Systems ##
@@ -344,7 +354,7 @@ The architectures are mostly the same. SeaweedFS aims to store and read files fa

* SeaweedFS optimizes for small files, ensuring O(1) disk seek operation, and can also handle large files.
* SeaweedFS statically assigns a volume id for a file. Locating file content becomes just a lookup of the volume id, which can be easily cached.
-* SeaweedFS Filer metadata store can be any well-known and proven data stores, e.g., Cassandra, Redis, Etcd, MySql, Postgres, etc, and is easy to customized.
+* SeaweedFS Filer metadata store can be any well-known and proven data stores, e.g., Cassandra, Redis, Etcd, MySql, Postgres, MemSql, TiDB, CockroachDB, etc, and is easy to customized.
* SeaweedFS Volume server also communicates directly with clients via HTTP, supporting range queries, direct uploads, etc.

| System | File Meta | File Content Read| POSIX | REST API | Optimized for small files |
@@ -376,7 +386,7 @@ Ceph uses CRUSH hashing to automatically manage the data placement. SeaweedFS pl

SeaweedFS is optimized for small files. Small files are stored as one continuous block of content, with at most 8 unused bytes between files. Small file access is O(1) disk read.

-SeaweedFS Filer uses off-the-shelf stores, such as MySql, Postgres, Redis, Etcd, Cassandra, to manage file directories. There are proven, scalable, and easier to manage.
+SeaweedFS Filer uses off-the-shelf stores, such as MySql, Postgres, Redis, Etcd, Cassandra, MemSql, TiDB, CockroachCB, to manage file directories. There are proven, scalable, and easier to manage.

| SeaweedFS | comparable to Ceph | advantage |
| ------------- | ------------- | ---------------- |
@@ -513,6 +523,8 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

+The text of this page is available for modification and reuse under the terms of the Creative Commons Attribution-Sharealike 3.0 Unported License and the GNU Free Documentation License (unversioned, with no invariant sections, front-cover texts, or back-cover texts).
+
[Back to TOC](#table-of-contents)

## Stargazers over time ##
@@ -1,5 +1,15 @@
-FROM golang:latest
-RUN go get github.com/chrislusf/seaweedfs/weed
+FROM frolvlad/alpine-glibc as builder
+RUN apk add git go g++
+RUN mkdir -p /go/src/github.com/chrislusf/
+RUN git clone https://github.com/chrislusf/seaweedfs /go/src/github.com/chrislusf/seaweedfs
+RUN cd /go/src/github.com/chrislusf/seaweedfs/weed && go install
+
+FROM alpine AS final
+LABEL author="Chris Lu"
+COPY --from=builder /root/go/bin/weed /usr/bin/
+RUN mkdir -p /etc/seaweedfs
+COPY --from=builder /go/src/github.com/chrislusf/seaweedfs/docker/filer.toml /etc/seaweedfs/filer.toml
+COPY --from=builder /go/src/github.com/chrislusf/seaweedfs/docker/entrypoint.sh /entrypoint.sh

# volume server gprc port
EXPOSE 18080
@@ -20,10 +30,6 @@ RUN mkdir -p /data/filerldb2

VOLUME /data

-RUN mkdir -p /etc/seaweedfs
-RUN cp /go/src/github.com/chrislusf/seaweedfs/docker/filer.toml /etc/seaweedfs/filer.toml
-RUN cp /go/src/github.com/chrislusf/seaweedfs/docker/entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
-RUN cp /go/bin/weed /usr/bin/

ENTRYPOINT ["/entrypoint.sh"]
@@ -11,11 +11,21 @@ docker-compose -f seaweedfs-compose.yml -p seaweedfs up

```

-## Development
+## Try latest tip

+```bash
+
+wget https://raw.githubusercontent.com/chrislusf/seaweedfs/master/docker/seaweedfs-dev-compose.yml
+
+docker-compose -f seaweedfs-dev-compose.yml -p seaweedfs up
+
+```
+
+## Local Development
+
```bash
cd $GOPATH/src/github.com/chrislusf/seaweedfs/docker

-docker-compose -f dev-compose.yml -p seaweedfs up
+docker-compose -f local-dev-compose.yml -p seaweedfs up

```
@@ -3,7 +3,7 @@
case "$1" in

'master')
-ARGS="-ip `hostname -i` -mdir /data"
+ARGS="-mdir /data"
# Is this instance linked with an other master? (Docker commandline "--link master1:master")
if [ -n "$MASTER_PORT_9333_TCP_ADDR" ] ; then
ARGS="$ARGS -peers=$MASTER_PORT_9333_TCP_ADDR:$MASTER_PORT_9333_TCP_PORT"
@@ -8,7 +8,7 @@ services:
ports:
- 9333:9333
- 19333:19333
-command: "master"
+command: "master -ip=master"
volume:
build:
context: .
@@ -16,7 +16,7 @@ services:
ports:
- 8080:8080
- 18080:18080
-command: 'volume -max=5 -mserver="master:9333" -port=8080'
+command: '-v=2 volume -max=5 -mserver="master:9333" -port=8080 -ip=volume'
depends_on:
- master
filer:
@@ -26,7 +26,7 @@ services:
ports:
- 8888:8888
- 18888:18888
-command: 'filer -master="master:9333"'
+command: '-v=4 filer -master="master:9333"'
depends_on:
- master
- volume
@@ -36,7 +36,7 @@ services:
dockerfile: Dockerfile.go_build
ports:
- 8333:8333
-command: 's3 -filer="filer:8888"'
+command: '-v=4 s3 -filer="filer:8888"'
depends_on:
- master
- volume
@@ -6,7 +6,7 @@ services:
ports:
- 9333:9333
- 19333:19333
-command: "master"
+command: "master -ip=master"
volume:
image: chrislusf/seaweedfs # use a remote image
ports:
docker/seaweedfs-dev-compose.yml (new file, 35 lines)
@@ -0,0 +1,35 @@
+version: '2'
+
+services:
+  master:
+    image: chrislusf/seaweedfs:dev # use a remote dev image
+    ports:
+      - 9333:9333
+      - 19333:19333
+    command: "master -ip=master"
+  volume:
+    image: chrislusf/seaweedfs:dev # use a remote dev image
+    ports:
+      - 8080:8080
+      - 18080:18080
+    command: '-v=2 volume -max=5 -mserver="master:9333" -port=8080 -ip=volume'
+    depends_on:
+      - master
+  filer:
+    image: chrislusf/seaweedfs:dev # use a remote dev image
+    ports:
+      - 8888:8888
+      - 18888:18888
+    command: '-v=4 filer -master="master:9333"'
+    depends_on:
+      - master
+      - volume
+  s3:
+    image: chrislusf/seaweedfs:dev # use a remote dev image
+    ports:
+      - 8333:8333
+    command: '-v=4 s3 -filer="filer:8888"'
+    depends_on:
+      - master
+      - volume
+      - filer
39
go.mod
39
go.mod
@@ -4,21 +4,10 @@ go 1.12

require (
cloud.google.com/go v0.44.3
-contrib.go.opencensus.io/exporter/aws v0.0.0-20190807220307-c50fb1bd7f21 // indirect
-contrib.go.opencensus.io/exporter/ocagent v0.6.0 // indirect
-contrib.go.opencensus.io/exporter/stackdriver v0.12.5 // indirect
-contrib.go.opencensus.io/resource v0.1.2 // indirect
-github.com/Azure/azure-amqp-common-go v1.1.4 // indirect
github.com/Azure/azure-pipeline-go v0.2.2 // indirect
-github.com/Azure/azure-sdk-for-go v33.0.0+incompatible // indirect
github.com/Azure/azure-storage-blob-go v0.8.0
-github.com/Azure/go-autorest v13.0.0+incompatible // indirect
-github.com/Azure/go-autorest/tracing v0.5.0 // indirect
github.com/DataDog/zstd v1.4.1 // indirect
-github.com/GoogleCloudPlatform/cloudsql-proxy v0.0.0-20190828224159-d93c53a4824c // indirect
github.com/Shopify/sarama v1.23.1
-github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751 // indirect
-github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4 // indirect
github.com/aws/aws-sdk-go v1.23.13
github.com/chrislusf/raft v0.0.0-20190225081310-10d6e2182d92
github.com/coreos/etcd v3.3.15+incompatible // indirect
@@ -28,37 +17,34 @@ require (
github.com/disintegration/imaging v1.6.1
github.com/dustin/go-humanize v1.0.0
github.com/eapache/go-resiliency v1.2.0 // indirect
-github.com/gabriel-vasile/mimetype v0.3.17
+github.com/facebookgo/clock v0.0.0-20150410010913-600d898af40a
-github.com/go-kit/kit v0.9.0 // indirect
+github.com/facebookgo/stats v0.0.0-20151006221625-1b76add642e4
+github.com/frankban/quicktest v1.7.2 // indirect
+github.com/gabriel-vasile/mimetype v1.0.0
github.com/go-redis/redis v6.15.2+incompatible
github.com/go-sql-driver/mysql v1.4.1
github.com/gocql/gocql v0.0.0-20190829130954-e163eff7a8c6
github.com/gogo/protobuf v1.2.2-0.20190730201129-28a6bbf47e48 // indirect
github.com/golang/protobuf v1.3.2
github.com/google/btree v1.0.0
-github.com/google/pprof v0.0.0-20190723021845-34ac40c74b70 // indirect
+github.com/google/uuid v1.1.1
github.com/gorilla/mux v1.7.3
github.com/gorilla/websocket v1.4.1 // indirect
github.com/grpc-ecosystem/grpc-gateway v1.11.0 // indirect
+github.com/hashicorp/golang-lru v0.5.3 // indirect
github.com/jacobsa/daemonize v0.0.0-20160101105449-e460293e890f
github.com/jcmturner/gofork v1.0.0 // indirect
-github.com/juju/errors v0.0.0-20190930114154-d42613fe1ab9 // indirect
github.com/karlseguin/ccache v2.0.3+incompatible
github.com/karlseguin/expect v1.0.1 // indirect
github.com/klauspost/cpuid v1.2.1 // indirect
github.com/klauspost/crc32 v1.2.0
github.com/klauspost/reedsolomon v1.9.2
github.com/konsorten/go-windows-terminal-sequences v1.0.2 // indirect
-github.com/kr/pty v1.1.8 // indirect
github.com/kurin/blazer v0.5.3
github.com/lib/pq v1.2.0
github.com/magiconair/properties v1.8.1 // indirect
github.com/mattn/go-ieproxy v0.0.0-20190805055040-f9202b1cfdeb // indirect
-github.com/mattn/go-isatty v0.0.9 // indirect
github.com/mattn/go-runewidth v0.0.4 // indirect
-github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f // indirect
-github.com/nats-io/gnatsd v1.4.1 // indirect
-github.com/nats-io/go-nats v1.7.2 // indirect
github.com/nats-io/nats-server/v2 v2.0.4 // indirect
github.com/onsi/ginkgo v1.10.1 // indirect
github.com/onsi/gomega v1.7.0 // indirect
@@ -75,10 +61,7 @@ require (
github.com/rakyll/statik v0.1.6
github.com/rcrowley/go-metrics v0.0.0-20190826022208-cac0b30c2563 // indirect
github.com/remyoudompheng/bigfft v0.0.0-20190728182440-6a916e37a237 // indirect
-github.com/rogpeppe/fastuuid v1.2.0 // indirect
-github.com/rogpeppe/go-internal v1.3.1 // indirect
github.com/rwcarlsen/goexif v0.0.0-20190401172101-9e8deecbddbd
-github.com/satori/go.uuid v1.2.0
github.com/seaweedfs/fuse v0.0.0-20190510212405-310228904eff
github.com/sirupsen/logrus v1.4.2 // indirect
github.com/spaolacci/murmur3 v1.1.0 // indirect
@@ -86,26 +69,21 @@ require (
github.com/spf13/jwalterweatherman v1.1.0 // indirect
github.com/spf13/viper v1.4.0
github.com/streadway/amqp v0.0.0-20190827072141-edfb9018d271 // indirect
-github.com/stretchr/testify v1.4.0 // indirect
+github.com/stretchr/testify v1.4.0
github.com/syndtr/goleveldb v1.0.0
github.com/tidwall/gjson v1.3.2
github.com/tidwall/match v1.0.1
-github.com/twinj/uuid v1.0.0 // indirect
github.com/uber-go/atomic v1.4.0 // indirect
github.com/uber/jaeger-client-go v2.17.0+incompatible // indirect
github.com/uber/jaeger-lib v2.0.0+incompatible // indirect
-github.com/ugorji/go v1.1.7 // indirect
github.com/willf/bitset v1.1.10 // indirect
github.com/willf/bloom v2.0.3+incompatible
github.com/wsxiaoys/terminal v0.0.0-20160513160801-0940f3fc43a0 // indirect
go.etcd.io/etcd v3.3.15+incompatible
-go.mongodb.org/mongo-driver v1.1.0 // indirect
gocloud.dev v0.16.0
gocloud.dev/pubsub/natspubsub v0.16.0
gocloud.dev/pubsub/rabbitpubsub v0.16.0
-golang.org/x/exp v0.0.0-20190829153037-c13cbed26979 // indirect
golang.org/x/image v0.0.0-20190829233526-b3c06291d021 // indirect
-golang.org/x/mobile v0.0.0-20190830201351-c6da95954960 // indirect
golang.org/x/net v0.0.0-20190909003024-a7b16738d86b
golang.org/x/sys v0.0.0-20190910064555-bbd175535a8b
golang.org/x/tools v0.0.0-20190911022129-16c5e0f7d110
@@ -115,8 +93,7 @@ require (
gopkg.in/jcmturner/goidentity.v3 v3.0.0 // indirect
gopkg.in/jcmturner/gokrb5.v7 v7.3.0 // indirect
gopkg.in/karlseguin/expect.v1 v1.0.1 // indirect
-honnef.co/go/tools v0.0.1-2019.2.2 // indirect
+sigs.k8s.io/yaml v1.1.0 // indirect
-pack.ag/amqp v0.12.1 // indirect
)

replace github.com/satori/go.uuid v1.2.0 => github.com/satori/go.uuid v0.0.0-20181028125025-b2ce2384e17b
go.sum (221 lines changed)
@@ -1,29 +1,17 @@
|
|||||||
bazil.org/fuse v0.0.0-20180421153158-65cc252bf669/go.mod h1:Xbm+BRKSBEpa4q4hTSxohYNQpsxXPbPry4JJWOB3LB8=
|
|
||||||
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
|
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
|
||||||
cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
|
cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
|
||||||
cloud.google.com/go v0.37.4/go.mod h1:NHPJ89PdicEuT9hdPXMROBD91xc5uRDxsMtSB16k7hw=
|
|
||||||
cloud.google.com/go v0.38.0/go.mod h1:990N+gfupTy94rShfmMCWGDn0LpTmnzTp2qbd1dvSRU=
|
cloud.google.com/go v0.38.0/go.mod h1:990N+gfupTy94rShfmMCWGDn0LpTmnzTp2qbd1dvSRU=
|
||||||
cloud.google.com/go v0.39.0/go.mod h1:rVLT6fkc8chs9sfPtFc1SBH6em7n+ZoXaG+87tDISts=
|
cloud.google.com/go v0.39.0/go.mod h1:rVLT6fkc8chs9sfPtFc1SBH6em7n+ZoXaG+87tDISts=
|
||||||
cloud.google.com/go v0.43.0 h1:banaiRPAM8kUVYneOSkhgcDsLzEvL25FinuiSZaH/2w=
|
|
||||||
cloud.google.com/go v0.43.0/go.mod h1:BOSR3VbTLkk6FDC/TcffxP4NF/FFBGA5ku+jvKOP7pg=
|
|
||||||
cloud.google.com/go v0.44.1/go.mod h1:iSa0KzasP4Uvy3f1mN/7PiObzGgflwredwwASm/v6AU=
|
cloud.google.com/go v0.44.1/go.mod h1:iSa0KzasP4Uvy3f1mN/7PiObzGgflwredwwASm/v6AU=
|
||||||
cloud.google.com/go v0.44.3 h1:0sMegbmn/8uTwpNkB0q9cLEpZ2W5a6kl+wtBQgPWBJQ=
|
cloud.google.com/go v0.44.3 h1:0sMegbmn/8uTwpNkB0q9cLEpZ2W5a6kl+wtBQgPWBJQ=
|
||||||
cloud.google.com/go v0.44.3/go.mod h1:60680Gw3Yr4ikxnPRS/oxxkBccT6SA1yMk63TGekxKY=
|
cloud.google.com/go v0.44.3/go.mod h1:60680Gw3Yr4ikxnPRS/oxxkBccT6SA1yMk63TGekxKY=
|
||||||
cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE=
|
cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE=
|
||||||
contrib.go.opencensus.io/exporter/aws v0.0.0-20181029163544-2befc13012d0/go.mod h1:uu1P0UCM/6RbsMrgPa98ll8ZcHM858i/AD06a9aLRCA=
|
contrib.go.opencensus.io/exporter/aws v0.0.0-20181029163544-2befc13012d0/go.mod h1:uu1P0UCM/6RbsMrgPa98ll8ZcHM858i/AD06a9aLRCA=
|
||||||
contrib.go.opencensus.io/exporter/aws v0.0.0-20190807220307-c50fb1bd7f21/go.mod h1:uu1P0UCM/6RbsMrgPa98ll8ZcHM858i/AD06a9aLRCA=
|
contrib.go.opencensus.io/exporter/ocagent v0.5.0 h1:TKXjQSRS0/cCDrP7KvkgU6SmILtF/yV2TOs/02K/WZQ=
|
||||||
contrib.go.opencensus.io/exporter/ocagent v0.4.12/go.mod h1:450APlNTSR6FrvC3CTRqYosuDstRB9un7SOx2k/9ckA=
|
|
||||||
contrib.go.opencensus.io/exporter/ocagent v0.5.0/go.mod h1:ImxhfLRpxoYiSq891pBrLVhN+qmP8BTVvdH2YLs7Gl0=
|
contrib.go.opencensus.io/exporter/ocagent v0.5.0/go.mod h1:ImxhfLRpxoYiSq891pBrLVhN+qmP8BTVvdH2YLs7Gl0=
|
||||||
contrib.go.opencensus.io/exporter/ocagent v0.6.0/go.mod h1:zmKjrJcdo0aYcVS7bmEeSEBLPA9YJp5bjrofdU3pIXs=
|
|
||||||
contrib.go.opencensus.io/exporter/stackdriver v0.11.0/go.mod h1:hA7rlmtavV03FGxzWXAPBUnZeZBhWN/QYQAuMtxc9Bk=
|
|
||||||
contrib.go.opencensus.io/exporter/stackdriver v0.12.1/go.mod h1:iwB6wGarfphGGe/e5CWqyUk/cLzKnWsOKPVW3no6OTw=
|
contrib.go.opencensus.io/exporter/stackdriver v0.12.1/go.mod h1:iwB6wGarfphGGe/e5CWqyUk/cLzKnWsOKPVW3no6OTw=
|
||||||
contrib.go.opencensus.io/exporter/stackdriver v0.12.5/go.mod h1:8x999/OcIPy5ivx/wDiV7Gx4D+VUPODf0mWRGRc5kSk=
|
|
||||||
contrib.go.opencensus.io/integrations/ocsql v0.1.4/go.mod h1:8DsSdjz3F+APR+0z0WkU1aRorQCFfRxvqjUUPMbF3fE=
|
contrib.go.opencensus.io/integrations/ocsql v0.1.4/go.mod h1:8DsSdjz3F+APR+0z0WkU1aRorQCFfRxvqjUUPMbF3fE=
|
||||||
contrib.go.opencensus.io/resource v0.0.0-20190131005048-21591786a5e0/go.mod h1:F361eGI91LCmW1I/Saf+rX0+OFcigGlFvXwEGEnkRLA=
|
|
||||||
contrib.go.opencensus.io/resource v0.1.1/go.mod h1:F361eGI91LCmW1I/Saf+rX0+OFcigGlFvXwEGEnkRLA=
|
contrib.go.opencensus.io/resource v0.1.1/go.mod h1:F361eGI91LCmW1I/Saf+rX0+OFcigGlFvXwEGEnkRLA=
|
||||||
contrib.go.opencensus.io/resource v0.1.2/go.mod h1:F361eGI91LCmW1I/Saf+rX0+OFcigGlFvXwEGEnkRLA=
|
|
||||||
github.com/Azure/azure-amqp-common-go v1.1.3/go.mod h1:FhZtXirFANw40UXI2ntweO+VOkfaw8s6vZxUiRhLYW8=
|
|
||||||
github.com/Azure/azure-amqp-common-go v1.1.4/go.mod h1:FhZtXirFANw40UXI2ntweO+VOkfaw8s6vZxUiRhLYW8=
|
|
||||||
github.com/Azure/azure-amqp-common-go/v2 v2.1.0/go.mod h1:R8rea+gJRuJR6QxTir/XuEd+YuKoUiazDC/N96FiDEU=
|
github.com/Azure/azure-amqp-common-go/v2 v2.1.0/go.mod h1:R8rea+gJRuJR6QxTir/XuEd+YuKoUiazDC/N96FiDEU=
|
||||||
github.com/Azure/azure-pipeline-go v0.1.8/go.mod h1:XA1kFWRVhSK+KNFiOhfv83Fv8L9achrP7OxIzeTn1Yg=
|
github.com/Azure/azure-pipeline-go v0.1.8/go.mod h1:XA1kFWRVhSK+KNFiOhfv83Fv8L9achrP7OxIzeTn1Yg=
|
||||||
github.com/Azure/azure-pipeline-go v0.1.9/go.mod h1:XA1kFWRVhSK+KNFiOhfv83Fv8L9achrP7OxIzeTn1Yg=
|
github.com/Azure/azure-pipeline-go v0.1.9/go.mod h1:XA1kFWRVhSK+KNFiOhfv83Fv8L9achrP7OxIzeTn1Yg=
|
||||||
@@ -31,26 +19,14 @@ github.com/Azure/azure-pipeline-go v0.2.1 h1:OLBdZJ3yvOn2MezlWvbrBMTEUQC72zAftRZ
|
|||||||
github.com/Azure/azure-pipeline-go v0.2.1/go.mod h1:UGSo8XybXnIGZ3epmeBw7Jdz+HiUVpqIlpz/HKHylF4=
|
github.com/Azure/azure-pipeline-go v0.2.1/go.mod h1:UGSo8XybXnIGZ3epmeBw7Jdz+HiUVpqIlpz/HKHylF4=
|
||||||
github.com/Azure/azure-pipeline-go v0.2.2 h1:6oiIS9yaG6XCCzhgAgKFfIWyo4LLCiDhZot6ltoThhY=
|
github.com/Azure/azure-pipeline-go v0.2.2 h1:6oiIS9yaG6XCCzhgAgKFfIWyo4LLCiDhZot6ltoThhY=
|
||||||
github.com/Azure/azure-pipeline-go v0.2.2/go.mod h1:4rQ/NZncSvGqNkkOsNpOU1tgoNuIlp9AfUH5G1tvCHc=
|
github.com/Azure/azure-pipeline-go v0.2.2/go.mod h1:4rQ/NZncSvGqNkkOsNpOU1tgoNuIlp9AfUH5G1tvCHc=
|
||||||
github.com/Azure/azure-sdk-for-go v21.3.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
|
|
||||||
github.com/Azure/azure-sdk-for-go v27.3.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
|
|
||||||
github.com/Azure/azure-sdk-for-go v29.0.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
|
github.com/Azure/azure-sdk-for-go v29.0.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
|
||||||
github.com/Azure/azure-sdk-for-go v30.1.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
|
github.com/Azure/azure-sdk-for-go v30.1.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
|
||||||
github.com/Azure/azure-sdk-for-go v33.0.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
|
|
||||||
github.com/Azure/azure-service-bus-go v0.4.1/go.mod h1:d9ho9e/06euiTwGpKxmlbpPhFUsfCsq6a4tZ68r51qI=
|
|
||||||
github.com/Azure/azure-service-bus-go v0.9.1/go.mod h1:yzBx6/BUGfjfeqbRZny9AQIbIe3AcV9WZbAdpkoXOa0=
|
github.com/Azure/azure-service-bus-go v0.9.1/go.mod h1:yzBx6/BUGfjfeqbRZny9AQIbIe3AcV9WZbAdpkoXOa0=
|
||||||
github.com/Azure/azure-storage-blob-go v0.6.0/go.mod h1:oGfmITT1V6x//CswqY2gtAHND+xIP64/qL7a5QJix0Y=
|
github.com/Azure/azure-storage-blob-go v0.6.0/go.mod h1:oGfmITT1V6x//CswqY2gtAHND+xIP64/qL7a5QJix0Y=
|
||||||
github.com/Azure/azure-storage-blob-go v0.7.0 h1:MuueVOYkufCxJw5YZzF842DY2MBsp+hLuh2apKY0mck=
|
|
||||||
github.com/Azure/azure-storage-blob-go v0.7.0/go.mod h1:f9YQKtsG1nMisotuTPpO0tjNuEjKRYAcJU8/ydDI++4=
|
|
||||||
github.com/Azure/azure-storage-blob-go v0.8.0 h1:53qhf0Oxa0nOjgbDeeYPUeyiNmafAFEY95rZLK0Tj6o=
|
github.com/Azure/azure-storage-blob-go v0.8.0 h1:53qhf0Oxa0nOjgbDeeYPUeyiNmafAFEY95rZLK0Tj6o=
|
||||||
github.com/Azure/azure-storage-blob-go v0.8.0/go.mod h1:lPI3aLPpuLTeUwh1sViKXFxwl2B6teiRqI0deQUvsw0=
|
github.com/Azure/azure-storage-blob-go v0.8.0/go.mod h1:lPI3aLPpuLTeUwh1sViKXFxwl2B6teiRqI0deQUvsw0=
|
||||||
github.com/Azure/go-autorest v11.0.0+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24=
|
github.com/Azure/go-autorest v12.0.0+incompatible h1:N+VqClcomLGD/sHb3smbSYYtNMgKpVV3Cd5r5i8z6bQ=
|
||||||
github.com/Azure/go-autorest v11.1.1+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24=
|
|
||||||
github.com/Azure/go-autorest v11.1.2+incompatible h1:viZ3tV5l4gE2Sw0xrasFHytCGtzYCrT+um/rrSQ1BfA=
|
|
||||||
github.com/Azure/go-autorest v11.1.2+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24=
|
|
||||||
github.com/Azure/go-autorest v12.0.0+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24=
|
github.com/Azure/go-autorest v12.0.0+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24=
|
||||||
github.com/Azure/go-autorest v13.0.0+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24=
|
|
||||||
github.com/Azure/go-autorest/tracing v0.1.0/go.mod h1:ROEEAFwXycQw7Sn3DXNtEedEvdeRAgDr0izn4z5Ij88=
|
|
||||||
github.com/Azure/go-autorest/tracing v0.5.0/go.mod h1:r/s2XiOKccPW3HrqB+W0TQzfbtp2fGCgRFtBroKn4Dk=
|
|
||||||
github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ=
|
github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ=
|
||||||
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
|
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
|
||||||
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
|
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
|
||||||
@@ -58,12 +34,8 @@ github.com/DataDog/zstd v1.3.6-0.20190409195224-796139022798 h1:2T/jmrHeTezcCM58
|
|||||||
github.com/DataDog/zstd v1.3.6-0.20190409195224-796139022798/go.mod h1:1jcaCB/ufaK+sKp1NBhlGmpz41jOoPQ35bpF36t7BBo=
|
github.com/DataDog/zstd v1.3.6-0.20190409195224-796139022798/go.mod h1:1jcaCB/ufaK+sKp1NBhlGmpz41jOoPQ35bpF36t7BBo=
|
||||||
github.com/DataDog/zstd v1.4.1 h1:3oxKN3wbHibqx897utPC2LTQU4J+IHWWJO+glkAkpFM=
|
github.com/DataDog/zstd v1.4.1 h1:3oxKN3wbHibqx897utPC2LTQU4J+IHWWJO+glkAkpFM=
|
||||||
github.com/DataDog/zstd v1.4.1/go.mod h1:1jcaCB/ufaK+sKp1NBhlGmpz41jOoPQ35bpF36t7BBo=
|
github.com/DataDog/zstd v1.4.1/go.mod h1:1jcaCB/ufaK+sKp1NBhlGmpz41jOoPQ35bpF36t7BBo=
|
||||||
github.com/GoogleCloudPlatform/cloudsql-proxy v0.0.0-20190418212003-6ac0b49e7197/go.mod h1:aJ4qN3TfrelA6NZ6AXsXRfmEVaYin3EDbSPJrKS8OXo=
|
|
||||||
github.com/GoogleCloudPlatform/cloudsql-proxy v0.0.0-20190605020000-c4ba1fdf4d36/go.mod h1:aJ4qN3TfrelA6NZ6AXsXRfmEVaYin3EDbSPJrKS8OXo=
|
github.com/GoogleCloudPlatform/cloudsql-proxy v0.0.0-20190605020000-c4ba1fdf4d36/go.mod h1:aJ4qN3TfrelA6NZ6AXsXRfmEVaYin3EDbSPJrKS8OXo=
|
||||||
github.com/GoogleCloudPlatform/cloudsql-proxy v0.0.0-20190828224159-d93c53a4824c/go.mod h1:mjwGPas4yKduTyubHvD1Atl9r1rUq8DfVy+gkVvZ+oo=
|
|
||||||
github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
|
github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
|
||||||
github.com/OneOfOne/xxhash v1.2.5/go.mod h1:eZbhyaAYD41SGSSsnmcpxVoRiQ/MPUTjUdIIOT9Um7Q=
|
|
||||||
github.com/Shopify/sarama v1.19.0/go.mod h1:FVkBWblsNy7DGZRfXLU0O9RCGt5g3g3yEuWXgklEdEo=
|
|
||||||
github.com/Shopify/sarama v1.23.1 h1:XxJBCZEoWJtoWjf/xRbmGUpAmTZGnuuF0ON0EvxxBrs=
|
github.com/Shopify/sarama v1.23.1 h1:XxJBCZEoWJtoWjf/xRbmGUpAmTZGnuuF0ON0EvxxBrs=
|
||||||
github.com/Shopify/sarama v1.23.1/go.mod h1:XLH1GYJnLVE0XCr6KdJGVJRTwY30moWNJ4sERjXX6fs=
|
github.com/Shopify/sarama v1.23.1/go.mod h1:XLH1GYJnLVE0XCr6KdJGVJRTwY30moWNJ4sERjXX6fs=
|
||||||
github.com/Shopify/toxiproxy v2.1.4+incompatible h1:TKdv8HiTLgE5wdJuEML90aBgNWsokNbMijUGhmcoBJc=
|
github.com/Shopify/toxiproxy v2.1.4+incompatible h1:TKdv8HiTLgE5wdJuEML90aBgNWsokNbMijUGhmcoBJc=
|
||||||
@@ -71,19 +43,11 @@ github.com/Shopify/toxiproxy v2.1.4+incompatible/go.mod h1:OXgGpZ6Cli1/URJOF1DMx
|
|||||||
github.com/StackExchange/wmi v0.0.0-20180725035823-b12b22c5341f h1:5ZfJxyXo8KyX8DgGXC5B7ILL8y51fci/qYz2B4j8iLY=
|
github.com/StackExchange/wmi v0.0.0-20180725035823-b12b22c5341f h1:5ZfJxyXo8KyX8DgGXC5B7ILL8y51fci/qYz2B4j8iLY=
|
||||||
github.com/StackExchange/wmi v0.0.0-20180725035823-b12b22c5341f/go.mod h1:3eOhrUMpNV+6aFIbp5/iudMxNCF27Vw2OZgy4xEx0Fg=
|
github.com/StackExchange/wmi v0.0.0-20180725035823-b12b22c5341f/go.mod h1:3eOhrUMpNV+6aFIbp5/iudMxNCF27Vw2OZgy4xEx0Fg=
|
||||||
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
|
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
|
||||||
github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
|
|
||||||
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
|
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
|
||||||
github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
|
|
||||||
github.com/apache/thrift v0.12.0/go.mod h1:cp2SuWMxlEZw2r+iP2GNCdIi4C1qmUzdZFSVb+bacwQ=
|
|
||||||
github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5doyWs3UAsr3K4I6qtAmlQcZDesFNEHPZAzj8=
|
github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5doyWs3UAsr3K4I6qtAmlQcZDesFNEHPZAzj8=
|
||||||
github.com/aws/aws-sdk-go v1.15.27/go.mod h1:mFuSZ37Z9YOHbQEwBWztmVzqXrEkub65tZoCYDt7FT0=
|
github.com/aws/aws-sdk-go v1.15.27/go.mod h1:mFuSZ37Z9YOHbQEwBWztmVzqXrEkub65tZoCYDt7FT0=
|
||||||
github.com/aws/aws-sdk-go v1.18.6/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo=
|
|
||||||
github.com/aws/aws-sdk-go v1.19.16/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo=
|
|
||||||
github.com/aws/aws-sdk-go v1.19.18/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo=
|
github.com/aws/aws-sdk-go v1.19.18/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo=
|
||||||
github.com/aws/aws-sdk-go v1.19.45/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo=
|
github.com/aws/aws-sdk-go v1.19.45/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo=
|
||||||
github.com/aws/aws-sdk-go v1.21.4 h1:1xB+x6Dzev8ETmeHEiSfUVbIzmC/0EyFfXMkJpzKPCE=
|
|
||||||
github.com/aws/aws-sdk-go v1.21.4/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo=
|
|
||||||
github.com/aws/aws-sdk-go v1.22.1/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo=
|
|
||||||
github.com/aws/aws-sdk-go v1.23.13 h1:l/NG+mgQFRGG3dsFzEj0jw9JIs/zYdtU6MXhY1WIDmM=
|
github.com/aws/aws-sdk-go v1.23.13 h1:l/NG+mgQFRGG3dsFzEj0jw9JIs/zYdtU6MXhY1WIDmM=
|
||||||
github.com/aws/aws-sdk-go v1.23.13/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo=
|
github.com/aws/aws-sdk-go v1.23.13/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo=
|
||||||
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
|
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
|
||||||
@@ -97,20 +61,20 @@ github.com/bitly/go-hostpool v0.0.0-20171023180738-a3a6125de932/go.mod h1:NOuUCS
|
|||||||
github.com/blacktear23/go-proxyprotocol v0.0.0-20180807104634-af7a81e8dd0d/go.mod h1:VKt7CNAQxpFpSDz3sXyj9hY/GbVsQCr0sB3w59nE7lU=
|
github.com/blacktear23/go-proxyprotocol v0.0.0-20180807104634-af7a81e8dd0d/go.mod h1:VKt7CNAQxpFpSDz3sXyj9hY/GbVsQCr0sB3w59nE7lU=
|
||||||
github.com/bmizerany/assert v0.0.0-20160611221934-b7ed37b82869 h1:DDGfHa7BWjL4YnC6+E63dPcxHo2sUxDIu8g3QgEJdRY=
|
github.com/bmizerany/assert v0.0.0-20160611221934-b7ed37b82869 h1:DDGfHa7BWjL4YnC6+E63dPcxHo2sUxDIu8g3QgEJdRY=
|
||||||
github.com/bmizerany/assert v0.0.0-20160611221934-b7ed37b82869/go.mod h1:Ekp36dRnpXw/yCqJaO+ZrUyxD+3VXMFFr56k5XYrpB4=
|
github.com/bmizerany/assert v0.0.0-20160611221934-b7ed37b82869/go.mod h1:Ekp36dRnpXw/yCqJaO+ZrUyxD+3VXMFFr56k5XYrpB4=
|
||||||
|
github.com/census-instrumentation/opencensus-proto v0.2.0 h1:LzQXZOgg4CQfE6bFvXGM30YZL1WW/M337pXml+GrcZ4=
|
||||||
github.com/census-instrumentation/opencensus-proto v0.2.0/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
|
github.com/census-instrumentation/opencensus-proto v0.2.0/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
|
||||||
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
|
|
||||||
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
|
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
|
||||||
github.com/chrislusf/raft v0.0.0-20190225081310-10d6e2182d92 h1:lM9SFsh0EPXkyJyrTJqLZPAIJBtNFP6LNkYXu2MnSZI=
|
github.com/chrislusf/raft v0.0.0-20190225081310-10d6e2182d92 h1:lM9SFsh0EPXkyJyrTJqLZPAIJBtNFP6LNkYXu2MnSZI=
|
||||||
github.com/chrislusf/raft v0.0.0-20190225081310-10d6e2182d92/go.mod h1:4jyiUCD5y548+yKW+oiHtccBiMaLCCbFBpK2t7X4eUo=
|
github.com/chrislusf/raft v0.0.0-20190225081310-10d6e2182d92/go.mod h1:4jyiUCD5y548+yKW+oiHtccBiMaLCCbFBpK2t7X4eUo=
|
||||||
github.com/chrislusf/seaweedfs v0.0.0-20190912032620-ae53f636804e h1:PmqW1XGq0V6KnwOFa3hOSqsqa/bH66zxWzCVMOo5Yi4=
|
|
||||||
github.com/chrislusf/seaweedfs v0.0.0-20190912032620-ae53f636804e/go.mod h1:e5Pz27e2DxLCFt6GbCBP5/qJygD4TkOL5xqSFYFq+2U=
|
|
||||||
github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI=
|
github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI=
|
||||||
github.com/chzyer/readline v0.0.0-20171208011716-f6d7a1f6fbf3/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI=
|
github.com/chzyer/readline v0.0.0-20171208011716-f6d7a1f6fbf3/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI=
|
||||||
github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU=
|
github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU=
|
||||||
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
|
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
|
||||||
|
github.com/codahale/hdrhistogram v0.0.0-20161010025455-3a0bb77429bd h1:qMd81Ts1T2OTKmB4acZcyKaMtRnY5Y44NuXGX2GFJ1w=
|
||||||
github.com/codahale/hdrhistogram v0.0.0-20161010025455-3a0bb77429bd/go.mod h1:sE/e/2PUdi/liOCUjSTXgM1o87ZssimdTWN964YiIeI=
|
github.com/codahale/hdrhistogram v0.0.0-20161010025455-3a0bb77429bd/go.mod h1:sE/e/2PUdi/liOCUjSTXgM1o87ZssimdTWN964YiIeI=
|
||||||
github.com/coreos/bbolt v1.3.2 h1:wZwiHHUieZCquLkDL0B8UhzreNWsPHooDAG3q34zk0s=
|
github.com/coreos/bbolt v1.3.2 h1:wZwiHHUieZCquLkDL0B8UhzreNWsPHooDAG3q34zk0s=
|
||||||
github.com/coreos/bbolt v1.3.2/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk=
|
github.com/coreos/bbolt v1.3.2/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk=
|
||||||
|
github.com/coreos/bbolt v1.3.3 h1:n6AiVyVRKQFNb6mJlwESEvvLoDyiTzXX7ORAUlkeBdY=
|
||||||
github.com/coreos/bbolt v1.3.3/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk=
|
github.com/coreos/bbolt v1.3.3/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk=
|
||||||
github.com/coreos/etcd v3.3.10+incompatible h1:jFneRYjIvLMLhDLCzuTuU4rSJUjRplcJQ7pD7MnhC04=
|
github.com/coreos/etcd v3.3.10+incompatible h1:jFneRYjIvLMLhDLCzuTuU4rSJUjRplcJQ7pD7MnhC04=
|
||||||
github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
|
github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
|
||||||
@@ -119,6 +83,7 @@ github.com/coreos/etcd v3.3.15+incompatible h1:+9RjdC18gMxNQVvSiXvObLu29mOFmkgds
|
|||||||
github.com/coreos/etcd v3.3.15+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
|
github.com/coreos/etcd v3.3.15+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
|
||||||
github.com/coreos/go-semver v0.2.0 h1:3Jm3tLmsgAYcjC+4Up7hJrFBPr+n7rAqYeSw/SZazuY=
|
github.com/coreos/go-semver v0.2.0 h1:3Jm3tLmsgAYcjC+4Up7hJrFBPr+n7rAqYeSw/SZazuY=
|
||||||
github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
|
github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
|
||||||
|
github.com/coreos/go-semver v0.3.0 h1:wkHLiw0WNATZnSG7epLsujiMCgPAc9xhjJ4tgnAxmfM=
|
||||||
github.com/coreos/go-semver v0.3.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
|
github.com/coreos/go-semver v0.3.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
|
||||||
github.com/coreos/go-systemd v0.0.0-20180511133405-39ca1b05acc7/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
|
github.com/coreos/go-systemd v0.0.0-20180511133405-39ca1b05acc7/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
|
||||||
github.com/coreos/go-systemd v0.0.0-20181031085051-9002847aa142/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
|
github.com/coreos/go-systemd v0.0.0-20181031085051-9002847aa142/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
|
||||||
@@ -129,9 +94,9 @@ github.com/coreos/go-systemd v0.0.0-20190719114852-fd7a80b32e1f/go.mod h1:F5haX7
|
|||||||
github.com/coreos/pkg v0.0.0-20160727233714-3ac0863d7acf/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
|
github.com/coreos/pkg v0.0.0-20160727233714-3ac0863d7acf/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
|
||||||
github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f h1:lBNOc5arjvs8E5mO2tbpBpLoyyu8B6e44T7hJy6potg=
|
github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f h1:lBNOc5arjvs8E5mO2tbpBpLoyyu8B6e44T7hJy6potg=
|
||||||
github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
|
github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
|
||||||
github.com/creack/pty v1.1.7/go.mod h1:lj5s0c3V2DBrqTV7llrYr5NG6My20zk30Fl46Y7DoTY=
|
|
||||||
github.com/cznic/mathutil v0.0.0-20181122101859-297441e03548 h1:iwZdTE0PVqJCos1vaoKsclOGD3ADKpshg3SRtYBbwso=
|
github.com/cznic/mathutil v0.0.0-20181122101859-297441e03548 h1:iwZdTE0PVqJCos1vaoKsclOGD3ADKpshg3SRtYBbwso=
|
||||||
github.com/cznic/mathutil v0.0.0-20181122101859-297441e03548/go.mod h1:e6NPNENfs9mPDVNRekM7lKScauxd5kXTr1Mfyig6TDM=
|
github.com/cznic/mathutil v0.0.0-20181122101859-297441e03548/go.mod h1:e6NPNENfs9mPDVNRekM7lKScauxd5kXTr1Mfyig6TDM=
|
||||||
|
github.com/cznic/sortutil v0.0.0-20150617083342-4c7342852e65 h1:hxuZop6tSoOi0sxFzoGGYdRqNrPubyaIf9KoBG9tPiE=
|
||||||
github.com/cznic/sortutil v0.0.0-20150617083342-4c7342852e65/go.mod h1:q2w6Bg5jeox1B+QkJ6Wp/+Vn0G/bo3f1uY7Fn3vivIQ=
|
github.com/cznic/sortutil v0.0.0-20150617083342-4c7342852e65/go.mod h1:q2w6Bg5jeox1B+QkJ6Wp/+Vn0G/bo3f1uY7Fn3vivIQ=
|
||||||
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
|
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
|
||||||
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
|
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
|
||||||
@@ -142,10 +107,7 @@ github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZm
|
|||||||
github.com/dgryski/go-farm v0.0.0-20190104051053-3adb47b1fb0f h1:dDxpBYafY/GYpcl+LS4Bn3ziLPuEdGRkRjYAbSlWxSA=
|
github.com/dgryski/go-farm v0.0.0-20190104051053-3adb47b1fb0f h1:dDxpBYafY/GYpcl+LS4Bn3ziLPuEdGRkRjYAbSlWxSA=
|
||||||
github.com/dgryski/go-farm v0.0.0-20190104051053-3adb47b1fb0f/go.mod h1:SqUrOPUnsFjfmXRMNPybcSiG0BgUW2AuFH8PAnS2iTw=
|
github.com/dgryski/go-farm v0.0.0-20190104051053-3adb47b1fb0f/go.mod h1:SqUrOPUnsFjfmXRMNPybcSiG0BgUW2AuFH8PAnS2iTw=
|
||||||
github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no=
|
github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no=
|
||||||
github.com/dgryski/go-sip13 v0.0.0-20190329191031-25c5027a8c7b/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no=
|
|
||||||
github.com/dimchansky/utfbom v1.1.0/go.mod h1:rO41eb7gLfo8SF1jd9F8HplJm1Fewwi4mQvIirEdv+8=
|
github.com/dimchansky/utfbom v1.1.0/go.mod h1:rO41eb7gLfo8SF1jd9F8HplJm1Fewwi4mQvIirEdv+8=
|
||||||
github.com/disintegration/imaging v1.6.0 h1:nVPXRUUQ36Z7MNf0O77UzgnOb1mkMMor7lmJMJXc/mA=
|
|
||||||
github.com/disintegration/imaging v1.6.0/go.mod h1:xuIt+sRxDFrHS0drzXUlCJthkJ8k7lkkUojDSR247MQ=
|
|
||||||
github.com/disintegration/imaging v1.6.1 h1:JnBbK6ECIZb1NsWIikP9pd8gIlTIRx7fuDNpU9fsxOE=
|
github.com/disintegration/imaging v1.6.1 h1:JnBbK6ECIZb1NsWIikP9pd8gIlTIRx7fuDNpU9fsxOE=
|
||||||
github.com/disintegration/imaging v1.6.1/go.mod h1:xuIt+sRxDFrHS0drzXUlCJthkJ8k7lkkUojDSR247MQ=
|
github.com/disintegration/imaging v1.6.1/go.mod h1:xuIt+sRxDFrHS0drzXUlCJthkJ8k7lkkUojDSR247MQ=
|
||||||
github.com/dustin/go-humanize v0.0.0-20171111073723-bb3d318650d4/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
|
github.com/dustin/go-humanize v0.0.0-20171111073723-bb3d318650d4/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
|
||||||
@@ -160,27 +122,28 @@ github.com/eapache/go-xerial-snappy v0.0.0-20180814174437-776d5712da21 h1:YEetp8
|
|||||||
github.com/eapache/go-xerial-snappy v0.0.0-20180814174437-776d5712da21/go.mod h1:+020luEh2TKB4/GOp8oxxtq0Daoen/Cii55CzbTV6DU=
|
github.com/eapache/go-xerial-snappy v0.0.0-20180814174437-776d5712da21/go.mod h1:+020luEh2TKB4/GOp8oxxtq0Daoen/Cii55CzbTV6DU=
|
||||||
github.com/eapache/queue v1.1.0 h1:YOEu7KNc61ntiQlcEeUIoDTJ2o8mQznoNvUhiigpIqc=
|
github.com/eapache/queue v1.1.0 h1:YOEu7KNc61ntiQlcEeUIoDTJ2o8mQznoNvUhiigpIqc=
|
||||||
go.sum — module checksum entries for the repository's dependencies, added and updated across this diff: github.com/* modules (including eapache/queue, eknkc/amber, envoyproxy, facebookgo, fsnotify, gabriel-vasile/mimetype, ghodss/yaml, gocql, gogo/protobuf, golang/groupcache, google/go-cmp, google/uuid, google/wire, gorilla/websocket, grpc-ecosystem, jmespath, json-iterator, karlseguin/ccache, klauspost, kurin/blazer, lib/pq, mattn, nats-io (jwt, nats-server, nats.go, nkeys, nuid), onsi/ginkgo and gomega, opentracing, pierrec/lz4, pingcap (check, errcode, errors, parser, pd, tidb, tidb-tools, tipb), prometheus (client_golang, client_model, common, procfs), rakyll/statik, seaweedfs/fuse, stretchr/objx and testify, tidwall, uber/jaeger), go.etcd.io/bbolt and etcd, go.opencensus.io, go.uber.org/zap, gocloud.dev with its pubsub/natspubsub and pubsub/rabbitpubsub drivers, and golang.org/x/crypto.
|
golang.org/x/crypto v0.0.0-20190530122614-20be4c3c3ed5/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
|
||||||
golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5 h1:58fnuSXlxZmFdJyvtTFVmVhcMLU6v5fEb/ok4wyqtNU=
|
golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5 h1:58fnuSXlxZmFdJyvtTFVmVhcMLU6v5fEb/ok4wyqtNU=
|
||||||
golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
|
golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
|
||||||
golang.org/x/crypto v0.0.0-20190701094942-4def268fd1a4/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
|
golang.org/x/crypto v0.0.0-20190701094942-4def268fd1a4/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
|
||||||
golang.org/x/crypto v0.0.0-20190829043050-9756ffdc2472 h1:Gv7RPwsi3eZ2Fgewe3CBsuOebPwO27PoXzRpJPsvSSM=
|
|
||||||
golang.org/x/crypto v0.0.0-20190829043050-9756ffdc2472/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
|
|
||||||
golang.org/x/crypto v0.0.0-20190909091759-094676da4a83/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
|
golang.org/x/crypto v0.0.0-20190909091759-094676da4a83/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
|
||||||
golang.org/x/crypto v0.0.0-20190911031432-227b76d455e7 h1:0hQKqeLdqlt5iIwVOBErRisrHJAN57yOiPRQItI20fU=
|
golang.org/x/crypto v0.0.0-20190911031432-227b76d455e7 h1:0hQKqeLdqlt5iIwVOBErRisrHJAN57yOiPRQItI20fU=
|
||||||
golang.org/x/crypto v0.0.0-20190911031432-227b76d455e7/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
|
golang.org/x/crypto v0.0.0-20190911031432-227b76d455e7/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
|
||||||
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
|
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
|
||||||
golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
|
|
||||||
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
|
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
|
||||||
golang.org/x/exp v0.0.0-20190731235908-ec7cb31e5a56/go.mod h1:JhuoJpWY28nO4Vef9tZUw9qufEGTyX1+7lmHxV5q5G4=
|
|
||||||
golang.org/x/exp v0.0.0-20190829153037-c13cbed26979/go.mod h1:86+5VVa7VpoJ4kLfm080zCjGlMRFzhUhsZKEZO7MGek=
|
|
||||||
golang.org/x/image v0.0.0-20180708004352-c73c2afc3b81/go.mod h1:ux5Hcp/YLpHSI86hEcLt0YII63i6oz57MZXIpbrjZUs=
|
golang.org/x/image v0.0.0-20180708004352-c73c2afc3b81/go.mod h1:ux5Hcp/YLpHSI86hEcLt0YII63i6oz57MZXIpbrjZUs=
|
||||||
golang.org/x/image v0.0.0-20190227222117-0694c2d4d067 h1:KYGJGHOQy8oSi1fDlSpcZF0+juKwk/hEMv5SiwHogR0=
|
golang.org/x/image v0.0.0-20190227222117-0694c2d4d067 h1:KYGJGHOQy8oSi1fDlSpcZF0+juKwk/hEMv5SiwHogR0=
|
||||||
golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js=
|
golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js=
|
||||||
golang.org/x/image v0.0.0-20190802002840-cff245a6509b/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
|
|
||||||
golang.org/x/image v0.0.0-20190829233526-b3c06291d021 h1:j6QOxNFMpEL1wIQX6TUdBPNfGZKmBOJS/vfSm8a7tdM=
|
golang.org/x/image v0.0.0-20190829233526-b3c06291d021 h1:j6QOxNFMpEL1wIQX6TUdBPNfGZKmBOJS/vfSm8a7tdM=
|
||||||
golang.org/x/image v0.0.0-20190829233526-b3c06291d021/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
|
golang.org/x/image v0.0.0-20190829233526-b3c06291d021/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
|
||||||
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
|
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
|
||||||
@ -688,26 +609,16 @@ golang.org/x/lint v0.0.0-20190301231843-5614ed5bae6f/go.mod h1:UVdnD1Gm6xHRNCYTk
|
|||||||
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
|
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
|
||||||
golang.org/x/lint v0.0.0-20190409202823-959b441ac422/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
|
golang.org/x/lint v0.0.0-20190409202823-959b441ac422/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
|
||||||
golang.org/x/mobile v0.0.0-20190312151609-d3739f865fa6/go.mod h1:z+o9i4GpDbdi3rU15maQ/Ox0txvL9dWGYEHz965HBQE=
|
golang.org/x/mobile v0.0.0-20190312151609-d3739f865fa6/go.mod h1:z+o9i4GpDbdi3rU15maQ/Ox0txvL9dWGYEHz965HBQE=
|
||||||
golang.org/x/mobile v0.0.0-20190719004257-d2bd2a29d028/go.mod h1:E/iHnbuqvinMTCcRqshq8CkpyQDoeVncDDYHnLhea+o=
|
|
||||||
golang.org/x/mobile v0.0.0-20190830201351-c6da95954960/go.mod h1:mJOp/i0LXPxJZ9weeIadcPqKVfS05Ai7m6/t9z1Hs/Y=
|
|
||||||
golang.org/x/mod v0.0.0-20190513183733-4bf6d317e70e/go.mod h1:mXi4GBBbnImb6dmsKGUJ2LatrhH/nqhxcFungHvyanc=
|
|
||||||
golang.org/x/mod v0.1.0/go.mod h1:0QHyrYULN0/3qlju5TqG8bIK38QM8yzMo5ekMj3DlcY=
|
|
||||||
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||||
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||||
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||||
golang.org/x/net v0.0.0-20181005035420-146acd28ed58/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
golang.org/x/net v0.0.0-20181005035420-146acd28ed58/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||||
golang.org/x/net v0.0.0-20181023162649-9b4f9f5ad519/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
|
||||||
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||||
golang.org/x/net v0.0.0-20181220203305-927f97764cc3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
golang.org/x/net v0.0.0-20181220203305-927f97764cc3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||||
golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||||
golang.org/x/net v0.0.0-20190125091013-d26f9f9a57f3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
|
||||||
golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||||
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
|
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
|
||||||
golang.org/x/net v0.0.0-20190320064053-1272bf9dcd53/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
|
|
||||||
golang.org/x/net v0.0.0-20190322120337-addf6b3196f6/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
|
|
||||||
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
|
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
|
||||||
golang.org/x/net v0.0.0-20190420063019-afa5a82059c6/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
|
|
||||||
golang.org/x/net v0.0.0-20190424112056-4829fb13d2c6/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
|
|
||||||
golang.org/x/net v0.0.0-20190501004415-9ce7a6920f09/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
|
golang.org/x/net v0.0.0-20190501004415-9ce7a6920f09/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
|
||||||
golang.org/x/net v0.0.0-20190503192946-f4e77d36d62c/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
|
golang.org/x/net v0.0.0-20190503192946-f4e77d36d62c/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
|
||||||
golang.org/x/net v0.0.0-20190522155817-f3200d17e092/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
|
golang.org/x/net v0.0.0-20190522155817-f3200d17e092/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
|
||||||
@ -715,17 +626,10 @@ golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR
|
|||||||
golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
||||||
golang.org/x/net v0.0.0-20190619014844-b5b0513f8c1b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
golang.org/x/net v0.0.0-20190619014844-b5b0513f8c1b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
||||||
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
||||||
golang.org/x/net v0.0.0-20190628185345-da137c7871d7/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
|
||||||
golang.org/x/net v0.0.0-20190724013045-ca1201d0de80 h1:Ao/3l156eZf2AW5wK8a7/smtodRU+gha3+BeqJ69lRk=
|
|
||||||
golang.org/x/net v0.0.0-20190724013045-ca1201d0de80/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
|
||||||
golang.org/x/net v0.0.0-20190813141303-74dc4d7220e7/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
|
||||||
golang.org/x/net v0.0.0-20190827160401-ba9fcec4b297 h1:k7pJ2yAPLPgbskkFdhRCsA77k2fySZ1zf2zCjvQCiIM=
|
|
||||||
golang.org/x/net v0.0.0-20190827160401-ba9fcec4b297/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
|
||||||
golang.org/x/net v0.0.0-20190909003024-a7b16738d86b h1:XfVGCX+0T4WOStkaOsJRllbsiImhB2jgVBGc9L0lPGc=
|
golang.org/x/net v0.0.0-20190909003024-a7b16738d86b h1:XfVGCX+0T4WOStkaOsJRllbsiImhB2jgVBGc9L0lPGc=
|
||||||
golang.org/x/net v0.0.0-20190909003024-a7b16738d86b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
golang.org/x/net v0.0.0-20190909003024-a7b16738d86b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
||||||
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
|
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
|
||||||
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
|
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
|
||||||
golang.org/x/oauth2 v0.0.0-20190319182350-c85d3e98c914/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
|
|
||||||
golang.org/x/oauth2 v0.0.0-20190402181905-9f3314589c9a/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
|
golang.org/x/oauth2 v0.0.0-20190402181905-9f3314589c9a/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
|
||||||
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45 h1:SVwTIAaPC2U/AvvLNZ2a7OVsmBpC8L5BlwK1whH3hm0=
|
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45 h1:SVwTIAaPC2U/AvvLNZ2a7OVsmBpC8L5BlwK1whH3hm0=
|
||||||
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
|
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
|
||||||
@ -740,7 +644,6 @@ golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5h
|
|||||||
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||||
golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||||
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||||
golang.org/x/sys v0.0.0-20181122145206-62eef0e2fa9b/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
|
||||||
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||||
golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||||
golang.org/x/sys v0.0.0-20190312061237-fead79001313/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
golang.org/x/sys v0.0.0-20190312061237-fead79001313/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||||
@ -749,24 +652,17 @@ golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7w
|
|||||||
golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||||
golang.org/x/sys v0.0.0-20190502145724-3ef323f4f1fd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
golang.org/x/sys v0.0.0-20190502145724-3ef323f4f1fd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||||
golang.org/x/sys v0.0.0-20190507160741-ecd444e8653b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
golang.org/x/sys v0.0.0-20190507160741-ecd444e8653b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||||
golang.org/x/sys v0.0.0-20190508220229-2d0786266e9c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
|
||||||
golang.org/x/sys v0.0.0-20190606165138-5da285871e9c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
golang.org/x/sys v0.0.0-20190606165138-5da285871e9c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||||
golang.org/x/sys v0.0.0-20190620070143-6f217b454f45/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
golang.org/x/sys v0.0.0-20190620070143-6f217b454f45/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||||
golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0 h1:HyfiK1WMnHj5FXFXatD+Qs1A/xC2Run6RzeW1SyHxpc=
|
golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0 h1:HyfiK1WMnHj5FXFXatD+Qs1A/xC2Run6RzeW1SyHxpc=
|
||||||
golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||||
golang.org/x/sys v0.0.0-20190712062909-fae7ac547cb7/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
|
||||||
golang.org/x/sys v0.0.0-20190726091711-fc99dfbffb4e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
golang.org/x/sys v0.0.0-20190726091711-fc99dfbffb4e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||||
golang.org/x/sys v0.0.0-20190801041406-cbf593c0f2f3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
golang.org/x/sys v0.0.0-20190801041406-cbf593c0f2f3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||||
golang.org/x/sys v0.0.0-20190804053845-51ab0e2deafa/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
|
||||||
golang.org/x/sys v0.0.0-20190813064441-fde4db37ae7a/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
|
||||||
golang.org/x/sys v0.0.0-20190830142957-1e83adbbebd0 h1:7z820YPX9pxWR59qM7BE5+fglp4D/mKqAwCvGt11b+8=
|
|
||||||
golang.org/x/sys v0.0.0-20190830142957-1e83adbbebd0/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
|
||||||
golang.org/x/sys v0.0.0-20190909082730-f460065e899a/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
golang.org/x/sys v0.0.0-20190909082730-f460065e899a/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||||
golang.org/x/sys v0.0.0-20190910064555-bbd175535a8b h1:3S2h5FadpNr0zUUCVZjlKIEYF+KaX/OBplTGo89CYHI=
|
golang.org/x/sys v0.0.0-20190910064555-bbd175535a8b h1:3S2h5FadpNr0zUUCVZjlKIEYF+KaX/OBplTGo89CYHI=
|
||||||
golang.org/x/sys v0.0.0-20190910064555-bbd175535a8b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
golang.org/x/sys v0.0.0-20190910064555-bbd175535a8b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||||
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
|
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
|
||||||
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
|
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
|
||||||
golang.org/x/text v0.3.1-0.20181227161524-e6919f6577db/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
|
|
||||||
golang.org/x/text v0.3.2 h1:tW2bmiBqwgJj/UpqtC8EpXEZVYOwU0yG4iWbprSVAcs=
|
golang.org/x/text v0.3.2 h1:tW2bmiBqwgJj/UpqtC8EpXEZVYOwU0yG4iWbprSVAcs=
|
||||||
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
|
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
|
||||||
golang.org/x/time v0.0.0-20180412165947-fbb02b2291d2/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
|
golang.org/x/time v0.0.0-20180412165947-fbb02b2291d2/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
|
||||||
@ -774,7 +670,6 @@ golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxb
|
|||||||
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4 h1:SvFZT6jyqRaOeXpc5h/JSfZenJ2O330aBsf7JfSUXmQ=
|
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4 h1:SvFZT6jyqRaOeXpc5h/JSfZenJ2O330aBsf7JfSUXmQ=
|
||||||
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
|
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
|
||||||
golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
||||||
golang.org/x/tools v0.0.0-20180828015842-6cd1fcedba52/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
|
||||||
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
||||||
golang.org/x/tools v0.0.0-20181030221726-6c7e314b6563/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
golang.org/x/tools v0.0.0-20181030221726-6c7e314b6563/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
||||||
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
||||||
@ -787,22 +682,12 @@ golang.org/x/tools v0.0.0-20190425150028-36563e24a262/go.mod h1:RgjU9mgBXZiqYHBn
|
|||||||
golang.org/x/tools v0.0.0-20190506145303-2d16b83fe98c/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
|
golang.org/x/tools v0.0.0-20190506145303-2d16b83fe98c/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
|
||||||
golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
|
golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
|
||||||
golang.org/x/tools v0.0.0-20190606124116-d0a3d012864b/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
|
golang.org/x/tools v0.0.0-20190606124116-d0a3d012864b/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
|
||||||
golang.org/x/tools v0.0.0-20190621195816-6e04913cbbac/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
|
|
||||||
golang.org/x/tools v0.0.0-20190628153133-6cdbf07be9d0/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
|
golang.org/x/tools v0.0.0-20190628153133-6cdbf07be9d0/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
|
||||||
golang.org/x/tools v0.0.0-20190724185037-8aa4eac1a7c1 h1:JwHzEZwWOyWUIR+OxPKGQGUfuOp/feyTesu6DEwqvsM=
|
|
||||||
golang.org/x/tools v0.0.0-20190724185037-8aa4eac1a7c1/go.mod h1:jcCCGcm9btYwXyDqrUWc6MKQKKGJCWEQ3AfLSRIbEuI=
|
|
||||||
golang.org/x/tools v0.0.0-20190816200558-6889da9d5479/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
|
|
||||||
golang.org/x/tools v0.0.0-20190830223141-573d9926052a h1:XAHT1kdPpnU8Hk+FPi42KZFhtNFEk4vBg1U4OmIeHTU=
|
|
||||||
golang.org/x/tools v0.0.0-20190830223141-573d9926052a/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
|
|
||||||
golang.org/x/tools v0.0.0-20190911022129-16c5e0f7d110 h1:6S6bidS7O4yAwA5ORRbRIjvNQ9tGbLd5e+LRIaTeVDQ=
|
golang.org/x/tools v0.0.0-20190911022129-16c5e0f7d110 h1:6S6bidS7O4yAwA5ORRbRIjvNQ9tGbLd5e+LRIaTeVDQ=
|
||||||
golang.org/x/tools v0.0.0-20190911022129-16c5e0f7d110/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
|
golang.org/x/tools v0.0.0-20190911022129-16c5e0f7d110/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
|
||||||
golang.org/x/xerrors v0.0.0-20190410155217-1f06c39b4373 h1:PPwnA7z1Pjf7XYaBP9GL1VAMZmcIWyFz7QCMSIIa3Bg=
|
|
||||||
golang.org/x/xerrors v0.0.0-20190410155217-1f06c39b4373/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
|
||||||
golang.org/x/xerrors v0.0.0-20190513163551-3ee3066db522/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
golang.org/x/xerrors v0.0.0-20190513163551-3ee3066db522/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||||
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7 h1:9zdDQZ7Thm29KFXgAX/+yaf3eVbP7djjWp/dXAppNCc=
|
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7 h1:9zdDQZ7Thm29KFXgAX/+yaf3eVbP7djjWp/dXAppNCc=
|
||||||
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||||
google.golang.org/api v0.3.1/go.mod h1:6wY9I6uQWHQ8EM57III9mq/AjF+i8G65rmVagqKMtkk=
|
|
||||||
google.golang.org/api v0.3.2/go.mod h1:6wY9I6uQWHQ8EM57III9mq/AjF+i8G65rmVagqKMtkk=
|
|
||||||
google.golang.org/api v0.4.0/go.mod h1:8k5glujaEP+g9n7WNsDg8QP6cUVNI86fCNMcbazEtwE=
|
google.golang.org/api v0.4.0/go.mod h1:8k5glujaEP+g9n7WNsDg8QP6cUVNI86fCNMcbazEtwE=
|
||||||
google.golang.org/api v0.5.0/go.mod h1:8k5glujaEP+g9n7WNsDg8QP6cUVNI86fCNMcbazEtwE=
|
google.golang.org/api v0.5.0/go.mod h1:8k5glujaEP+g9n7WNsDg8QP6cUVNI86fCNMcbazEtwE=
|
||||||
google.golang.org/api v0.6.0/go.mod h1:btoxGiFvQNVUZQ8W08zLtrVS08CNpINPEfxXxgJL1Q4=
|
google.golang.org/api v0.6.0/go.mod h1:btoxGiFvQNVUZQ8W08zLtrVS08CNpINPEfxXxgJL1Q4=
|
||||||
@ -816,37 +701,27 @@ google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7
|
|||||||
google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
|
google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
|
||||||
google.golang.org/appengine v1.6.1 h1:QzqyMA1tlu6CgqCDUtU9V+ZKhLFT2dkJuANu5QaxI3I=
|
google.golang.org/appengine v1.6.1 h1:QzqyMA1tlu6CgqCDUtU9V+ZKhLFT2dkJuANu5QaxI3I=
|
||||||
google.golang.org/appengine v1.6.1/go.mod h1:i06prIuMbXzDqacNJfV5OdTW448YApPu5ww/cMBSeb0=
|
google.golang.org/appengine v1.6.1/go.mod h1:i06prIuMbXzDqacNJfV5OdTW448YApPu5ww/cMBSeb0=
|
||||||
|
google.golang.org/appengine v1.6.2 h1:j8RI1yW0SkI+paT6uGwMlrMI/6zwYA6/CFil8rxOzGI=
|
||||||
google.golang.org/appengine v1.6.2/go.mod h1:i06prIuMbXzDqacNJfV5OdTW448YApPu5ww/cMBSeb0=
|
google.golang.org/appengine v1.6.2/go.mod h1:i06prIuMbXzDqacNJfV5OdTW448YApPu5ww/cMBSeb0=
|
||||||
google.golang.org/genproto v0.0.0-20180608181217-32ee49c4dd80/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
|
google.golang.org/genproto v0.0.0-20180608181217-32ee49c4dd80/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
|
||||||
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
|
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
|
||||||
google.golang.org/genproto v0.0.0-20181004005441-af9cb2a35e7f/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
|
google.golang.org/genproto v0.0.0-20181004005441-af9cb2a35e7f/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
|
||||||
google.golang.org/genproto v0.0.0-20190307195333-5fe7a883aa19/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
|
google.golang.org/genproto v0.0.0-20190307195333-5fe7a883aa19/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
|
||||||
google.golang.org/genproto v0.0.0-20190404172233-64821d5d2107/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
|
|
||||||
google.golang.org/genproto v0.0.0-20190418145605-e7d98fc518a7/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
|
google.golang.org/genproto v0.0.0-20190418145605-e7d98fc518a7/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
|
||||||
google.golang.org/genproto v0.0.0-20190425155659-357c62f0e4bb/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
|
google.golang.org/genproto v0.0.0-20190425155659-357c62f0e4bb/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
|
||||||
google.golang.org/genproto v0.0.0-20190502173448-54afdca5d873/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
|
google.golang.org/genproto v0.0.0-20190502173448-54afdca5d873/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
|
||||||
google.golang.org/genproto v0.0.0-20190508193815-b515fa19cec8/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
|
google.golang.org/genproto v0.0.0-20190508193815-b515fa19cec8/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
|
||||||
google.golang.org/genproto v0.0.0-20190530194941-fb225487d101/go.mod h1:z3L6/3dTEVtUr6QSP8miRzeRqwQOioJ9I66odjN4I7s=
|
google.golang.org/genproto v0.0.0-20190530194941-fb225487d101/go.mod h1:z3L6/3dTEVtUr6QSP8miRzeRqwQOioJ9I66odjN4I7s=
|
||||||
google.golang.org/genproto v0.0.0-20190620144150-6af8c5fc6601/go.mod h1:z3L6/3dTEVtUr6QSP8miRzeRqwQOioJ9I66odjN4I7s=
|
google.golang.org/genproto v0.0.0-20190620144150-6af8c5fc6601/go.mod h1:z3L6/3dTEVtUr6QSP8miRzeRqwQOioJ9I66odjN4I7s=
|
||||||
google.golang.org/genproto v0.0.0-20190716160619-c506a9f90610 h1:Ygq9/SRJX9+dU0WCIICM8RkWvDw03lvB77hrhJnpxfU=
|
|
||||||
google.golang.org/genproto v0.0.0-20190716160619-c506a9f90610/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
|
|
||||||
google.golang.org/genproto v0.0.0-20190801165951-fa694d86fc64/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
|
google.golang.org/genproto v0.0.0-20190801165951-fa694d86fc64/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
|
||||||
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55 h1:gSJIx1SDwno+2ElGhA4+qG2zF97qiUzTM+rQ0klBOcE=
|
|
||||||
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
|
|
||||||
google.golang.org/genproto v0.0.0-20190905072037-92dd089d5514 h1:oFSK4421fpCKRrpzIpybyBVWyht05NegY9+L/3TLAZs=
|
google.golang.org/genproto v0.0.0-20190905072037-92dd089d5514 h1:oFSK4421fpCKRrpzIpybyBVWyht05NegY9+L/3TLAZs=
|
||||||
google.golang.org/genproto v0.0.0-20190905072037-92dd089d5514/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
|
google.golang.org/genproto v0.0.0-20190905072037-92dd089d5514/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
|
||||||
google.golang.org/grpc v0.0.0-20180607172857-7a6a684ca69e/go.mod h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmEdcZw=
|
google.golang.org/grpc v0.0.0-20180607172857-7a6a684ca69e/go.mod h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmEdcZw=
|
||||||
google.golang.org/grpc v1.14.0/go.mod h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmEdcZw=
|
google.golang.org/grpc v1.14.0/go.mod h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmEdcZw=
|
||||||
google.golang.org/grpc v1.17.0/go.mod h1:6QZJwpn2B+Zp71q/5VxRsJ6NXXVCE5NRUHRo+f3cWCs=
|
|
||||||
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
|
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
|
||||||
google.golang.org/grpc v1.19.1/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
|
|
||||||
google.golang.org/grpc v1.20.0/go.mod h1:chYK+tFQF0nDUGJgXMSgLCQk3phJEuONr2DCgLDdAQM=
|
|
||||||
google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
|
google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
|
||||||
google.golang.org/grpc v1.21.0/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
|
google.golang.org/grpc v1.21.0/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
|
||||||
google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
|
google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
|
||||||
google.golang.org/grpc v1.22.0 h1:J0UbZOIrCAl+fpTOf8YLs4dJo8L/owV4LYVtAXQoPkw=
|
|
||||||
google.golang.org/grpc v1.22.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
|
|
||||||
google.golang.org/grpc v1.22.1/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
|
|
||||||
google.golang.org/grpc v1.23.0 h1:AzbTB6ux+okLTzP8Ru1Xs41C303zdcfEht7MQnYJt5A=
|
google.golang.org/grpc v1.23.0 h1:AzbTB6ux+okLTzP8Ru1Xs41C303zdcfEht7MQnYJt5A=
|
||||||
google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
|
google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
|
||||||
gopkg.in/airbrake/gobrake.v2 v2.0.9/go.mod h1:/h5ZAUhDkGaJfjzjKLSjv6zCL6O0LLBxU4K+aSYdM/U=
|
gopkg.in/airbrake/gobrake.v2 v2.0.9/go.mod h1:/h5ZAUhDkGaJfjzjKLSjv6zCL6O0LLBxU4K+aSYdM/U=
|
||||||
@ -857,9 +732,9 @@ gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8
|
|||||||
gopkg.in/check.v1 v1.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
gopkg.in/check.v1 v1.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
||||||
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 h1:qIbj1fsPNlZgppZ+VLlY7N33q108Sa+fhmuc+sWQYwY=
|
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 h1:qIbj1fsPNlZgppZ+VLlY7N33q108Sa+fhmuc+sWQYwY=
|
||||||
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
||||||
|
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15 h1:YR8cESwS4TdDjEe65xsg0ogRM/Nc3DYOhEAlW+xobZo=
|
||||||
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
||||||
gopkg.in/cheggaaa/pb.v1 v1.0.25/go.mod h1:V/YB90LKu/1FcN3WVnfiiE5oMCibMjukxqG/qStrOgw=
|
gopkg.in/cheggaaa/pb.v1 v1.0.25/go.mod h1:V/YB90LKu/1FcN3WVnfiiE5oMCibMjukxqG/qStrOgw=
|
||||||
gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
|
|
||||||
gopkg.in/fsnotify.v1 v1.4.7 h1:xOHLXZwVvI9hhs+cLKq5+I5onOuwQLhQwiu63xxlHs4=
|
gopkg.in/fsnotify.v1 v1.4.7 h1:xOHLXZwVvI9hhs+cLKq5+I5onOuwQLhQwiu63xxlHs4=
|
||||||
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
|
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
|
||||||
gopkg.in/gemnasium/logrus-airbrake-hook.v2 v2.1.2/go.mod h1:Xk6kEKp8OKb+X14hQBKWaSkCsqBpgog8nAV2xsGOxlo=
|
gopkg.in/gemnasium/logrus-airbrake-hook.v2 v2.1.2/go.mod h1:Xk6kEKp8OKb+X14hQBKWaSkCsqBpgog8nAV2xsGOxlo=
|
||||||
@ -888,18 +763,14 @@ gopkg.in/yaml.v2 v2.0.0-20170812160011-eb3733d160e7/go.mod h1:JAlM8MvJe8wmxCU4Bl
|
|||||||
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
||||||
gopkg.in/yaml.v2 v2.2.2 h1:ZCJp+EgiOT7lHqUV2J862kp8Qj64Jo6az82+3Td9dZw=
|
gopkg.in/yaml.v2 v2.2.2 h1:ZCJp+EgiOT7lHqUV2J862kp8Qj64Jo6az82+3Td9dZw=
|
||||||
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
||||||
honnef.co/go/tools v0.0.0-20180728063816-88497007e858/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
|
|
||||||
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
|
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
|
||||||
honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
|
honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
|
||||||
honnef.co/go/tools v0.0.0-20190418001031-e561f6794a2a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
|
honnef.co/go/tools v0.0.0-20190418001031-e561f6794a2a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
|
||||||
honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
|
honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
|
||||||
honnef.co/go/tools v0.0.1-2019.2.2/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg=
|
|
||||||
istio.io/gogo-genproto v0.0.0-20190731221249-06e20ada0df2/go.mod h1:IjvrbUlRbbw4JCpsgvgihcz9USUwEoNTL/uwMtyV5yk=
|
|
||||||
istio.io/gogo-genproto v0.0.0-20190826122855-47f00599b597/go.mod h1:uKtbae4K9k2rjjX4ToV0l6etglbc1i7gqQ94XdkshzY=
|
|
||||||
pack.ag/amqp v0.8.0/go.mod h1:4/cbmt4EJXSKlG6LCfWHoqmN0uFdy5i/+YFz+fTfhV4=
|
|
||||||
pack.ag/amqp v0.11.0/go.mod h1:4/cbmt4EJXSKlG6LCfWHoqmN0uFdy5i/+YFz+fTfhV4=
|
|
||||||
pack.ag/amqp v0.11.2/go.mod h1:4/cbmt4EJXSKlG6LCfWHoqmN0uFdy5i/+YFz+fTfhV4=
|
pack.ag/amqp v0.11.2/go.mod h1:4/cbmt4EJXSKlG6LCfWHoqmN0uFdy5i/+YFz+fTfhV4=
|
||||||
pack.ag/amqp v0.12.1/go.mod h1:4/cbmt4EJXSKlG6LCfWHoqmN0uFdy5i/+YFz+fTfhV4=
|
|
||||||
rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=
|
rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=
|
||||||
|
sigs.k8s.io/yaml v1.1.0 h1:4A07+ZFc2wgJwo8YNlQpr1rVlgUDlxXHhPJciaPY5gs=
|
||||||
|
sigs.k8s.io/yaml v1.1.0/go.mod h1:UJmg0vDUVViEyp3mgSv9WPwZCDxu4rQW1olrI1uml+o=
|
||||||
|
sourcegraph.com/sourcegraph/appdash v0.0.0-20180531100431-4c381bd170b4 h1:VO9oZbbkvTwqLimlQt15QNdOOBArT2dw/bvzsMZBiqQ=
|
||||||
sourcegraph.com/sourcegraph/appdash v0.0.0-20180531100431-4c381bd170b4/go.mod h1:hI742Nqp5OhwiqlzhgfbWU4mW4yO10fP+LoT9WOswdU=
|
sourcegraph.com/sourcegraph/appdash v0.0.0-20180531100431-4c381bd170b4/go.mod h1:hI742Nqp5OhwiqlzhgfbWU4mW4yO10fP+LoT9WOswdU=
|
||||||
sourcegraph.com/sourcegraph/appdash-data v0.0.0-20151005221446-73f23eafcf67/go.mod h1:L5q+DGLGOQFpo1snNEkLOJT2d1YTW66rWNzatr3He1k=
|
sourcegraph.com/sourcegraph/appdash-data v0.0.0-20151005221446-73f23eafcf67/go.mod h1:L5q+DGLGOQFpo1snNEkLOJT2d1YTW66rWNzatr3He1k=
|
||||||
|
@@ -4,7 +4,7 @@

     <groupId>com.github.chrislusf</groupId>
     <artifactId>seaweedfs-client</artifactId>
-    <version>1.2.3</version>
+    <version>1.2.4</version>

     <parent>
         <groupId>org.sonatype.oss</groupId>
@@ -7,6 +7,7 @@ import java.nio.file.Path;
 import java.nio.file.Paths;
 import java.util.ArrayList;
 import java.util.Arrays;
+import java.util.Iterator;
 import java.util.List;

 public class FilerClient {
@@ -173,17 +174,18 @@ public class FilerClient {
     }

     public List<FilerProto.Entry> listEntries(String path, String entryPrefix, String lastEntryName, int limit) {
-        List<FilerProto.Entry> entries = filerGrpcClient.getBlockingStub().listEntries(FilerProto.ListEntriesRequest.newBuilder()
+        Iterator<FilerProto.ListEntriesResponse> iter = filerGrpcClient.getBlockingStub().listEntries(FilerProto.ListEntriesRequest.newBuilder()
                 .setDirectory(path)
                 .setPrefix(entryPrefix)
                 .setStartFromFileName(lastEntryName)
                 .setLimit(limit)
-                .build()).getEntriesList();
-        List<FilerProto.Entry> fixedEntries = new ArrayList<>(entries.size());
-        for (FilerProto.Entry entry : entries) {
-            fixedEntries.add(fixEntryAfterReading(entry));
+                .build());
+        List<FilerProto.Entry> entries = new ArrayList<>();
+        while (iter.hasNext()){
+            FilerProto.ListEntriesResponse resp = iter.next();
+            entries.add(fixEntryAfterReading(resp.getEntry()));
         }
-        return fixedEntries;
+        return entries;
     }

     public FilerProto.Entry lookupEntry(String directory, String entryName) {
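[Editor's note, not part of the commit: with ListEntries now a server-streaming RPC, a client drains the stream instead of reading one repeated field. A minimal Go sketch of that loop (imports: context, io, and the generated filer_pb package); the package name, client wiring, and the 1024 page size are illustrative assumptions.]

// listAll drains a streaming ListEntries call and collects the entries.
func listAll(ctx context.Context, client filer_pb.SeaweedFilerClient, dir string) ([]*filer_pb.Entry, error) {
	stream, err := client.ListEntries(ctx, &filer_pb.ListEntriesRequest{Directory: dir, Limit: 1024})
	if err != nil {
		return nil, err
	}
	var entries []*filer_pb.Entry
	for {
		resp, err := stream.Recv()
		if err == io.EOF {
			return entries, nil // server closed the stream: listing complete
		}
		if err != nil {
			return nil, err
		}
		entries = append(entries, resp.Entry) // one entry per streamed response
	}
}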
@@ -63,7 +63,7 @@ public class SeaweedRead {
         if (!chunkView.isFullChunk) {
             request.setHeader(HttpHeaders.ACCEPT_ENCODING, "");
             request.setHeader(HttpHeaders.RANGE,
-                    String.format("bytes=%d-%d", chunkView.offset, chunkView.offset + chunkView.size));
+                    String.format("bytes=%d-%d", chunkView.offset, chunkView.offset + chunkView.size - 1));
         }

         try {
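[Editor's note, not part of the commit: the "- 1" matters because HTTP Range end positions are inclusive (RFC 7233), so a chunk of size bytes starting at offset ends at byte offset+size-1; without it the server returns one extra byte per chunk. The same arithmetic in a small Go sketch:]

// rangeHeader builds an inclusive byte-range header for a chunk view.
func rangeHeader(offset, size int64) string {
	// last index is offset+size-1: Range end offsets are inclusive
	return fmt.Sprintf("bytes=%d-%d", offset, offset+size-1)
}

// rangeHeader(0, 8) == "bytes=0-7"  -> exactly 8 bytes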
@@ -12,7 +12,7 @@ service SeaweedFiler {
     rpc LookupDirectoryEntry (LookupDirectoryEntryRequest) returns (LookupDirectoryEntryResponse) {
     }

-    rpc ListEntries (ListEntriesRequest) returns (ListEntriesResponse) {
+    rpc ListEntries (ListEntriesRequest) returns (stream ListEntriesResponse) {
     }

     rpc CreateEntry (CreateEntryRequest) returns (CreateEntryResponse) {
@@ -64,7 +64,7 @@ message ListEntriesRequest {
 }

 message ListEntriesResponse {
-    repeated Entry entries = 1;
+    Entry entry = 1;
 }

 message Entry {
@@ -123,9 +123,11 @@ message FuseAttributes {
 message CreateEntryRequest {
     string directory = 1;
     Entry entry = 2;
+    bool o_excl = 3;
 }

 message CreateEntryResponse {
+    string error = 1;
 }

 message UpdateEntryRequest {
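[Editor's note, not part of the commit: ListEntriesResponse now carries a single entry and the RPC streams, so the filer can send one message per entry instead of building a repeated list in memory. The new o_excl flag on CreateEntryRequest presumably requests exclusive create (fail if the entry already exists), and the error string on CreateEntryResponse gives the server a way to report it. A hedged Go sketch of a streaming handler; FilerServer and the lookupEntries helper are hypothetical, only the generated stream type follows from the proto above:]

func (fs *FilerServer) ListEntries(req *filer_pb.ListEntriesRequest, stream filer_pb.SeaweedFiler_ListEntriesServer) error {
	// hypothetical lookup of at most req.Limit entries under req.Directory
	for _, entry := range fs.lookupEntries(req.Directory, req.Prefix, req.StartFromFileName, req.Limit) {
		if err := stream.Send(&filer_pb.ListEntriesResponse{Entry: entry}); err != nil {
			return err // client cancelled or the stream broke
		}
	}
	return nil
}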
@@ -127,7 +127,7 @@
     </snapshotRepository>
   </distributionManagement>
   <properties>
-    <seaweedfs.client.version>1.2.3</seaweedfs.client.version>
+    <seaweedfs.client.version>1.2.4</seaweedfs.client.version>
     <hadoop.version>2.9.2</hadoop.version>
   </properties>
 </project>
@@ -5,7 +5,7 @@
   <modelVersion>4.0.0</modelVersion>

   <properties>
-    <seaweedfs.client.version>1.2.3</seaweedfs.client.version>
+    <seaweedfs.client.version>1.2.4</seaweedfs.client.version>
     <hadoop.version>2.9.2</hadoop.version>
   </properties>

@@ -127,7 +127,7 @@
     </snapshotRepository>
   </distributionManagement>
   <properties>
-    <seaweedfs.client.version>1.2.3</seaweedfs.client.version>
+    <seaweedfs.client.version>1.2.4</seaweedfs.client.version>
     <hadoop.version>3.1.1</hadoop.version>
   </properties>
 </project>
@@ -5,7 +5,7 @@
   <modelVersion>4.0.0</modelVersion>

   <properties>
-    <seaweedfs.client.version>1.2.3</seaweedfs.client.version>
+    <seaweedfs.client.version>1.2.4</seaweedfs.client.version>
     <hadoop.version>3.1.1</hadoop.version>
   </properties>

@@ -8,8 +8,9 @@ import (
 	"strconv"

 	"github.com/chrislusf/seaweedfs/weed/glog"
-	"github.com/chrislusf/seaweedfs/weed/storage"
+	"github.com/chrislusf/seaweedfs/weed/storage/backend"
 	"github.com/chrislusf/seaweedfs/weed/storage/needle"
+	"github.com/chrislusf/seaweedfs/weed/storage/super_block"
 )

 var (
@@ -47,9 +48,10 @@ func main() {
 	if err != nil {
 		glog.Fatalf("Open Volume Data File [ERROR]: %v", err)
 	}
-	defer datFile.Close()
+	datBackend := backend.NewDiskFile(datFile)
+	defer datBackend.Close()

-	superBlock, err := storage.ReadSuperBlock(datFile)
+	superBlock, err := super_block.ReadSuperBlock(datBackend)

 	if err != nil {
 		glog.Fatalf("cannot parse existing super block: %v", err)
@@ -61,7 +63,7 @@ func main() {
 	hasChange := false

 	if *targetReplica != "" {
-		replica, err := storage.NewReplicaPlacementFromString(*targetReplica)
+		replica, err := super_block.NewReplicaPlacementFromString(*targetReplica)

 		if err != nil {
 			glog.Fatalf("cannot parse target replica %s: %v", *targetReplica, err)
@@ -9,8 +9,9 @@ import (
 	"strconv"

 	"github.com/chrislusf/seaweedfs/weed/glog"
-	"github.com/chrislusf/seaweedfs/weed/storage"
+	"github.com/chrislusf/seaweedfs/weed/storage/backend"
 	"github.com/chrislusf/seaweedfs/weed/storage/needle"
+	"github.com/chrislusf/seaweedfs/weed/storage/super_block"
 	"github.com/chrislusf/seaweedfs/weed/storage/types"
 	"github.com/chrislusf/seaweedfs/weed/util"
 )
@@ -44,11 +45,13 @@ func main() {
 		glog.Fatalf("Read Volume Index %v", err)
 	}
 	defer indexFile.Close()
-	datFile, err := os.OpenFile(path.Join(*fixVolumePath, fileName+".dat"), os.O_RDONLY, 0644)
+	datFileName := path.Join(*fixVolumePath, fileName+".dat")
+	datFile, err := os.OpenFile(datFileName, os.O_RDONLY, 0644)
 	if err != nil {
 		glog.Fatalf("Read Volume Data %v", err)
 	}
-	defer datFile.Close()
+	datBackend := backend.NewDiskFile(datFile)
+	defer datBackend.Close()

 	newDatFile, err := os.Create(path.Join(*fixVolumePath, fileName+".dat_fixed"))
 	if err != nil {
@@ -56,21 +59,21 @@ func main() {
 	}
 	defer newDatFile.Close()

-	superBlock, err := storage.ReadSuperBlock(datFile)
+	superBlock, err := super_block.ReadSuperBlock(datBackend)
 	if err != nil {
 		glog.Fatalf("Read Volume Data superblock %v", err)
 	}
 	newDatFile.Write(superBlock.Bytes())

-	iterateEntries(datFile, indexFile, func(n *needle.Needle, offset int64) {
+	iterateEntries(datBackend, indexFile, func(n *needle.Needle, offset int64) {
 		fmt.Printf("needle id=%v name=%s size=%d dataSize=%d\n", n.Id, string(n.Name), n.Size, n.DataSize)
-		_, s, _, e := n.Append(newDatFile, superBlock.Version())
+		_, s, _, e := n.Append(datBackend, superBlock.Version)
 		fmt.Printf("size %d error %v\n", s, e)
 	})

 }

-func iterateEntries(datFile, idxFile *os.File, visitNeedle func(n *needle.Needle, offset int64)) {
+func iterateEntries(datBackend backend.BackendStorageFile, idxFile *os.File, visitNeedle func(n *needle.Needle, offset int64)) {
 	// start to read index file
 	var readerOffset int64
 	bytes := make([]byte, 16)
@@ -78,14 +81,14 @@ func iterateEntries(datFile, idxFile *os.File, visitNeedle func(n *needle.Needle
 	readerOffset += int64(count)

 	// start to read dat file
-	superBlock, err := storage.ReadSuperBlock(datFile)
+	superBlock, err := super_block.ReadSuperBlock(datBackend)
 	if err != nil {
 		fmt.Printf("cannot read dat file super block: %v", err)
 		return
 	}
 	offset := int64(superBlock.BlockSize())
-	version := superBlock.Version()
-	n, _, rest, err := needle.ReadNeedleHeader(datFile, version, offset)
+	version := superBlock.Version
+	n, _, rest, err := needle.ReadNeedleHeader(datBackend, version, offset)
 	if err != nil {
 		fmt.Printf("cannot read needle header: %v", err)
 		return
@@ -115,7 +118,7 @@ func iterateEntries(datFile, idxFile *os.File, visitNeedle func(n *needle.Needle
 			fmt.Println("Recovered in f", r)
 		}
 	}()
-	if _, err = n.ReadNeedleBody(datFile, version, offset+int64(types.NeedleHeaderSize), rest); err != nil {
+	if _, err = n.ReadNeedleBody(datBackend, version, offset+int64(types.NeedleHeaderSize), rest); err != nil {
 		fmt.Printf("cannot read needle body: offset %d body %d %v\n", offset, rest, err)
 	}
 }()
@@ -127,7 +130,7 @@ func iterateEntries(datFile, idxFile *os.File, visitNeedle func(n *needle.Needle

 	offset += types.NeedleHeaderSize + rest
 	//fmt.Printf("==> new entry offset %d\n", offset)
-	if n, _, rest, err = needle.ReadNeedleHeader(datFile, version, offset); err != nil {
+	if n, _, rest, err = needle.ReadNeedleHeader(datBackend, version, offset); err != nil {
 		if err == io.EOF {
 			return
 		}
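[Editor's note, not part of the commit: these tools now go through backend.NewDiskFile and a backend.BackendStorageFile value instead of a raw *os.File, decoupling needle and super-block I/O from the local filesystem. A minimal sketch of the shape such an abstraction takes, assuming only the calls visible in this diff (ReadAt-style reads, WriteAt, Close); the real interface in weed/storage/backend may carry more methods.]

type BackendStorageFile interface {
	io.ReaderAt
	io.WriterAt
	io.Closer
}

// DiskFile adapts a local *os.File to the backend interface.
type DiskFile struct{ f *os.File }

func NewDiskFile(f *os.File) *DiskFile                       { return &DiskFile{f: f} }
func (d *DiskFile) ReadAt(p []byte, off int64) (int, error)  { return d.f.ReadAt(p, off) }
func (d *DiskFile) WriteAt(p []byte, off int64) (int, error) { return d.f.WriteAt(p, off) }
func (d *DiskFile) Close() error                             { return d.f.Close() }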
@@ -8,7 +8,9 @@ import (

 	"github.com/chrislusf/seaweedfs/weed/glog"
 	"github.com/chrislusf/seaweedfs/weed/storage"
+	"github.com/chrislusf/seaweedfs/weed/storage/backend"
 	"github.com/chrislusf/seaweedfs/weed/storage/needle"
+	"github.com/chrislusf/seaweedfs/weed/storage/super_block"
 )

 var (
@@ -23,15 +25,16 @@ func Checksum(n *needle.Needle) string {

 type VolumeFileScanner4SeeDat struct {
 	version needle.Version
-	block   storage.SuperBlock
+	block   super_block.SuperBlock

 	dir        string
 	hashes     map[string]bool
 	dat        *os.File
+	datBackend backend.BackendStorageFile
 }

-func (scanner *VolumeFileScanner4SeeDat) VisitSuperBlock(superBlock storage.SuperBlock) error {
-	scanner.version = superBlock.Version()
+func (scanner *VolumeFileScanner4SeeDat) VisitSuperBlock(superBlock super_block.SuperBlock) error {
+	scanner.version = superBlock.Version
 	scanner.block = superBlock
 	return nil

@@ -42,13 +45,14 @@ func (scanner *VolumeFileScanner4SeeDat) ReadNeedleBody() bool {

 func (scanner *VolumeFileScanner4SeeDat) VisitNeedle(n *needle.Needle, offset int64, needleHeader, needleBody []byte) error {

-	if scanner.dat == nil {
-		newDatFile, err := os.Create(filepath.Join(*volumePath, "dat_fixed"))
+	if scanner.datBackend == nil {
+		newFileName := filepath.Join(*volumePath, "dat_fixed")
+		newDatFile, err := os.Create(newFileName)
 		if err != nil {
 			glog.Fatalf("Write New Volume Data %v", err)
 		}
-		scanner.dat = newDatFile
-		scanner.dat.Write(scanner.block.Bytes())
+		scanner.datBackend = backend.NewDiskFile(newDatFile)
+		scanner.datBackend.WriteAt(scanner.block.Bytes(), 0)
 	}

 	checksum := Checksum(n)
@@ -59,7 +63,7 @@ func (scanner *VolumeFileScanner4SeeDat) VisitNeedle(n *needle.Needle, offset in
 	}
 	scanner.hashes[checksum] = true

-	_, s, _, e := n.Append(scanner.dat, scanner.version)
+	_, s, _, e := n.Append(scanner.datBackend, scanner.version)
 	fmt.Printf("size %d error %v\n", s, e)

 	return nil
@@ -7,10 +7,8 @@ import (
 	"log"
 	"math/rand"

-	"github.com/chrislusf/seaweedfs/weed/security"
-	"github.com/spf13/viper"

 	"github.com/chrislusf/seaweedfs/weed/operation"
+	"github.com/chrislusf/seaweedfs/weed/security"
 	"github.com/chrislusf/seaweedfs/weed/util"
 )

@@ -23,7 +21,7 @@ func main() {
 	flag.Parse()

 	util.LoadConfiguration("security", false)
-	grpcDialOption := security.LoadClientTLS(viper.Sub("grpc"), "client")
+	grpcDialOption := security.LoadClientTLS(util.GetViper(), "grpc.client")

 	for i := 0; i < *repeat; i++ {
 		assignResult, err := operation.Assign(*master, grpcDialOption, &operation.VolumeAssignRequest{Count: 1})
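A pattern that repeats through the rest of this commit: instead of slicing a sub-tree off the configuration with viper.Sub("grpc") and passing a short component name, the whole viper instance is now passed together with a dotted prefix such as "grpc.client". A minimal sketch of the difference, assuming a hypothetical cert key under a [grpc.client] section of security.toml (the key name is an illustration, not taken from this diff):

package main

import (
	"fmt"

	"github.com/spf13/viper"
)

func main() {
	// Old style: slice off the "grpc" section first; Sub returns nil when the
	// section is missing, which every caller had to guard against.
	if sub := viper.Sub("grpc"); sub != nil {
		fmt.Println(sub.GetString("client.cert"))
	}

	// New style: hand the whole configuration around and make the component
	// part of the key prefix, as in security.LoadClientTLS(util.GetViper(), "grpc.client").
	fmt.Println(viper.GetString("grpc.client.cert"))
}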
@@ -7,6 +7,7 @@ import (
 	"github.com/chrislusf/seaweedfs/weed/glog"
 	"github.com/chrislusf/seaweedfs/weed/storage"
 	"github.com/chrislusf/seaweedfs/weed/storage/needle"
+	"github.com/chrislusf/seaweedfs/weed/storage/super_block"
 )

 var (

@@ -19,8 +20,8 @@ type VolumeFileScanner4SeeDat struct {
 	version needle.Version
 }

-func (scanner *VolumeFileScanner4SeeDat) VisitSuperBlock(superBlock storage.SuperBlock) error {
-	scanner.version = superBlock.Version()
+func (scanner *VolumeFileScanner4SeeDat) VisitSuperBlock(superBlock super_block.SuperBlock) error {
+	scanner.version = superBlock.Version
 	return nil

 }
@@ -25,7 +25,7 @@ func main() {
 	flag.Parse()

 	util2.LoadConfiguration("security", false)
-	grpcDialOption := security.LoadClientTLS(viper.Sub("grpc"), "client")
+	grpcDialOption := security.LoadClientTLS(util.GetViper(), "grpc.client")

 	vid := needle.VolumeId(*volumeId)

@@ -5,8 +5,8 @@ import (

 	"github.com/chrislusf/seaweedfs/weed/security"
 	"github.com/chrislusf/seaweedfs/weed/storage/needle"
+	"github.com/chrislusf/seaweedfs/weed/storage/super_block"
 	"github.com/chrislusf/seaweedfs/weed/util"
-	"github.com/spf13/viper"

 	"github.com/chrislusf/seaweedfs/weed/operation"
 	"github.com/chrislusf/seaweedfs/weed/storage"

@@ -64,7 +64,7 @@ var cmdBackup = &Command{
 func runBackup(cmd *Command, args []string) bool {

 	util.LoadConfiguration("security", false)
-	grpcDialOption := security.LoadClientTLS(viper.Sub("grpc"), "client")
+	grpcDialOption := security.LoadClientTLS(util.GetViper(), "grpc.client")

 	if *s.volumeId == -1 {
 		return false

@@ -98,15 +98,15 @@ func runBackup(cmd *Command, args []string) bool {
 			return true
 		}
 	}
-	var replication *storage.ReplicaPlacement
+	var replication *super_block.ReplicaPlacement
 	if *s.replication != "" {
-		replication, err = storage.NewReplicaPlacementFromString(*s.replication)
+		replication, err = super_block.NewReplicaPlacementFromString(*s.replication)
 		if err != nil {
 			fmt.Printf("Error generate volume %d replication %s : %v\n", vid, *s.replication, err)
 			return true
 		}
 	} else {
-		replication, err = storage.NewReplicaPlacementFromString(stats.Replication)
+		replication, err = super_block.NewReplicaPlacementFromString(stats.Replication)
 		if err != nil {
 			fmt.Printf("Error get volume %d replication %s : %v\n", vid, stats.Replication, err)
 			return true

@@ -119,7 +119,7 @@ func runBackup(cmd *Command, args []string) bool {
 	}

 	if v.SuperBlock.CompactionRevision < uint16(stats.CompactRevision) {
-		if err = v.Compact(0, 0); err != nil {
+		if err = v.Compact2(30 * 1024 * 1024 * 1024); err != nil {
 			fmt.Printf("Compact Volume before synchronizing %v\n", err)
 			return true
 		}

@@ -128,7 +128,7 @@ func runBackup(cmd *Command, args []string) bool {
 			return true
 		}
 		v.SuperBlock.CompactionRevision = uint16(stats.CompactRevision)
-		v.DataFile().WriteAt(v.SuperBlock.Bytes(), 0)
+		v.DataBackend.WriteAt(v.SuperBlock.Bytes(), 0)
 	}

 	datSize, _, _ := v.FileStat()

@@ -15,7 +15,6 @@ import (
 	"sync"
 	"time"

-	"github.com/spf13/viper"
 	"google.golang.org/grpc"

 	"github.com/chrislusf/seaweedfs/weed/glog"

@@ -109,7 +108,7 @@ var (
 func runBenchmark(cmd *Command, args []string) bool {

 	util.LoadConfiguration("security", false)
-	b.grpcDialOption = security.LoadClientTLS(viper.Sub("grpc"), "client")
+	b.grpcDialOption = security.LoadClientTLS(util.GetViper(), "grpc.client")

 	fmt.Printf("This is SeaweedFS version %s %s %s\n", util.VERSION, runtime.GOOS, runtime.GOARCH)
 	if *b.maxCpu < 1 {
@@ -17,6 +17,9 @@ var cmdCompact = &Command{
 	The compacted .dat file is stored as .cpd file.
 	The compacted .idx file is stored as .cpx file.

+	For method=0, it compacts based on the .dat file, works if .idx file is corrupted.
+	For method=1, it compacts based on the .idx file, works if deletion happened but not written to .dat files.
+
 	`,
 }

@@ -47,7 +50,7 @@ func runCompact(cmd *Command, args []string) bool {
 			glog.Fatalf("Compact Volume [ERROR] %s\n", err)
 		}
 	} else {
-		if err = v.Compact2(); err != nil {
+		if err = v.Compact2(preallocate); err != nil {
 			glog.Fatalf("Compact Volume [ERROR] %s\n", err)
 		}
 	}
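In other words, the new help text documents two compaction strategies: method 0 rebuilds from the .dat data file (useful when the .idx index is corrupted), while method 1 rebuilds from the .idx index (useful when deletions were recorded in the index but not yet applied to the .dat file). A typical invocation would look something like `weed compact -dir=/data -volumeId=7 -method=1`; the flag names here are assumed from the surrounding command code rather than taken from this diff.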
@@ -4,6 +4,7 @@ import (
 	"archive/tar"
 	"bytes"
 	"fmt"
+	"io"
 	"os"
 	"path"
 	"path/filepath"

@@ -12,11 +13,11 @@ import (
 	"text/template"
 	"time"

-	"io"

 	"github.com/chrislusf/seaweedfs/weed/glog"
 	"github.com/chrislusf/seaweedfs/weed/storage"
 	"github.com/chrislusf/seaweedfs/weed/storage/needle"
+	"github.com/chrislusf/seaweedfs/weed/storage/needle_map"
+	"github.com/chrislusf/seaweedfs/weed/storage/super_block"
 	"github.com/chrislusf/seaweedfs/weed/storage/types"
 )

@@ -89,12 +90,12 @@ func printNeedle(vid needle.VolumeId, n *needle.Needle, version needle.Version,
 type VolumeFileScanner4Export struct {
 	version   needle.Version
 	counter   int
-	needleMap *storage.NeedleMap
+	needleMap *needle_map.MemDb
 	vid       needle.VolumeId
 }

-func (scanner *VolumeFileScanner4Export) VisitSuperBlock(superBlock storage.SuperBlock) error {
-	scanner.version = superBlock.Version()
+func (scanner *VolumeFileScanner4Export) VisitSuperBlock(superBlock super_block.SuperBlock) error {
+	scanner.version = superBlock.Version
 	return nil

 }

@@ -192,15 +193,10 @@ func runExport(cmd *Command, args []string) bool {
 		fileName = *export.collection + "_" + fileName
 	}
 	vid := needle.VolumeId(*export.volumeId)
-	indexFile, err := os.OpenFile(path.Join(*export.dir, fileName+".idx"), os.O_RDONLY, 0644)
-	if err != nil {
-		glog.Fatalf("Create Volume Index [ERROR] %s\n", err)
-	}
-	defer indexFile.Close()

-	needleMap, err := storage.LoadBtreeNeedleMap(indexFile)
-	if err != nil {
-		glog.Fatalf("cannot load needle map from %s: %s", indexFile.Name(), err)
+	needleMap := needle_map.NewMemDb()
+	if err := needleMap.LoadFromIdx(path.Join(*export.dir, fileName+".idx")); err != nil {
+		glog.Fatalf("cannot load needle map from %s.idx: %s", fileName, err)
 	}

 	volumeFileScanner := &VolumeFileScanner4Export{
@@ -6,14 +6,13 @@ import (
 	"strings"
 	"time"

-	"github.com/chrislusf/seaweedfs/weed/security"
-	"github.com/spf13/viper"
+	"google.golang.org/grpc/reflection"

 	"github.com/chrislusf/seaweedfs/weed/glog"
 	"github.com/chrislusf/seaweedfs/weed/pb/filer_pb"
+	"github.com/chrislusf/seaweedfs/weed/security"
 	"github.com/chrislusf/seaweedfs/weed/server"
 	"github.com/chrislusf/seaweedfs/weed/util"
-	"google.golang.org/grpc/reflection"
 )

 var (

@@ -145,7 +144,7 @@ func (fo *FilerOptions) startFiler() {
 	if err != nil {
 		glog.Fatalf("failed to listen on grpc port %d: %v", grpcPort, err)
 	}
-	grpcS := util.NewGrpcServer(security.LoadServerTLS(viper.Sub("grpc"), "filer"))
+	grpcS := util.NewGrpcServer(security.LoadServerTLS(util.GetViper(), "grpc.filer"))
 	filer_pb.RegisterSeaweedFilerServer(grpcS, fs)
 	reflection.Register(grpcS)
 	go grpcS.Serve(grpcL)
@@ -14,13 +14,13 @@ import (
 	"sync"
 	"time"

+	"google.golang.org/grpc"
+
 	"github.com/chrislusf/seaweedfs/weed/operation"
 	"github.com/chrislusf/seaweedfs/weed/pb/filer_pb"
 	"github.com/chrislusf/seaweedfs/weed/security"
 	"github.com/chrislusf/seaweedfs/weed/util"
 	"github.com/chrislusf/seaweedfs/weed/wdclient"
-	"github.com/spf13/viper"
-	"google.golang.org/grpc"
 )

 var (

@@ -105,7 +105,7 @@ func runCopy(cmd *Command, args []string) bool {

 	filerGrpcPort := filerPort + 10000
 	filerGrpcAddress := fmt.Sprintf("%s:%d", filerUrl.Hostname(), filerGrpcPort)
-	copy.grpcDialOption = security.LoadClientTLS(viper.Sub("grpc"), "client")
+	copy.grpcDialOption = security.LoadClientTLS(util.GetViper(), "grpc.client")

 	ctx := context.Background()

@@ -331,7 +331,7 @@ func (worker *FileCopyWorker) uploadFileAsOne(ctx context.Context, task FileCopy
 		},
 	}

-	if _, err := client.CreateEntry(ctx, request); err != nil {
+	if err := filer_pb.CreateEntry(ctx, client, request); err != nil {
 		return fmt.Errorf("update fh: %v", err)
 	}
 	return nil

@@ -378,7 +378,7 @@ func (worker *FileCopyWorker) uploadFileInChunks(ctx context.Context, task FileC
 		uploadResult, err := operation.Upload(targetUrl,
 			fileName+"-"+strconv.FormatInt(i+1, 10),
 			io.NewSectionReader(f, i*chunkSize, chunkSize),
-			false, "application/octet-stream", nil, assignResult.Auth)
+			false, "", nil, assignResult.Auth)
 		if err != nil {
 			uploadError = fmt.Errorf("upload data %v to %s: %v\n", fileName, targetUrl, err)
 			return

@@ -435,7 +435,7 @@ func (worker *FileCopyWorker) uploadFileInChunks(ctx context.Context, task FileC
 		},
 	}

-	if _, err := client.CreateEntry(ctx, request); err != nil {
+	if err := filer_pb.CreateEntry(ctx, client, request); err != nil {
 		return fmt.Errorf("update fh: %v", err)
 	}
 	return nil

@@ -466,7 +466,7 @@ func detectMimeType(f *os.File) string {

 func withFilerClient(ctx context.Context, filerAddress string, grpcDialOption grpc.DialOption, fn func(filer_pb.SeaweedFilerClient) error) error {

-	return util.WithCachedGrpcClient(ctx, func(clientConn *grpc.ClientConn) error {
+	return util.WithCachedGrpcClient(ctx, func(ctx context.Context, clientConn *grpc.ClientConn) error {
 		client := filer_pb.NewSeaweedFilerClient(clientConn)
 		return fn(client)
 	}, filerAddress, grpcDialOption)
@@ -39,7 +39,7 @@ func runFilerReplicate(cmd *Command, args []string) bool {
 	util.LoadConfiguration("security", false)
 	util.LoadConfiguration("replication", true)
 	util.LoadConfiguration("notification", true)
-	config := viper.GetViper()
+	config := util.GetViper()

 	var notificationInput sub.NotificationInput

@@ -47,8 +47,7 @@ func runFilerReplicate(cmd *Command, args []string) bool {

 	for _, input := range sub.NotificationInputs {
 		if config.GetBool("notification." + input.GetName() + ".enabled") {
-			viperSub := config.Sub("notification." + input.GetName())
-			if err := input.Initialize(viperSub); err != nil {
+			if err := input.Initialize(config, "notification."+input.GetName()+"."); err != nil {
 				glog.Fatalf("Failed to initialize notification input for %s: %+v",
 					input.GetName(), err)
 			}

@@ -66,10 +65,9 @@ func runFilerReplicate(cmd *Command, args []string) bool {

 	// avoid recursive replication
 	if config.GetBool("notification.source.filer.enabled") && config.GetBool("notification.sink.filer.enabled") {
-		sourceConfig, sinkConfig := config.Sub("source.filer"), config.Sub("sink.filer")
-		if sourceConfig.GetString("grpcAddress") == sinkConfig.GetString("grpcAddress") {
-			fromDir := sourceConfig.GetString("directory")
-			toDir := sinkConfig.GetString("directory")
+		if config.GetString("source.filer.grpcAddress") == config.GetString("sink.filer.grpcAddress") {
+			fromDir := config.GetString("source.filer.directory")
+			toDir := config.GetString("sink.filer.directory")
 			if strings.HasPrefix(toDir, fromDir) {
 				glog.Fatalf("recursive replication! source directory %s includes the sink directory %s", fromDir, toDir)
 			}

@@ -79,8 +77,7 @@ func runFilerReplicate(cmd *Command, args []string) bool {
 	var dataSink sink.ReplicationSink
 	for _, sk := range sink.Sinks {
 		if config.GetBool("sink." + sk.GetName() + ".enabled") {
-			viperSub := config.Sub("sink." + sk.GetName())
-			if err := sk.Initialize(viperSub); err != nil {
+			if err := sk.Initialize(config, "sink."+sk.GetName()+"."); err != nil {
 				glog.Fatalf("Failed to initialize sink for %s: %+v",
 					sk.GetName(), err)
 			}

@@ -98,7 +95,7 @@ func runFilerReplicate(cmd *Command, args []string) bool {
 		return true
 	}

-	replicator := replication.NewReplicator(config.Sub("source.filer"), dataSink)
+	replicator := replication.NewReplicator(config, "source.filer.", dataSink)

 	for {
 		key, m, err := notificationInput.ReceiveMessage()
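The same refactor shows up in the plugin interfaces above: notification inputs, replication sinks, and the replicator now receive the whole configuration object plus a key prefix instead of a pre-sliced viper sub-tree. A hedged sketch of what an implementation on the receiving side might look like; the package name, the parameter type, and the topic key are illustrative assumptions, not the actual SeaweedFS interface:

package sub // hypothetical package, for illustration only

import (
	"fmt"

	"github.com/spf13/viper"
)

// exampleInput sketches the prefix-based configuration pattern used above.
type exampleInput struct {
	topic string
}

func (in *exampleInput) GetName() string { return "example" }

// Initialize resolves its own keys by concatenating the caller-supplied prefix,
// e.g. prefix = "notification.example." resolves the key "notification.example.topic".
func (in *exampleInput) Initialize(config *viper.Viper, prefix string) error {
	in.topic = config.GetString(prefix + "topic")
	if in.topic == "" {
		return fmt.Errorf("%stopic is not configured", prefix)
	}
	return nil
}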
@@ -8,6 +8,8 @@ import (
 	"github.com/chrislusf/seaweedfs/weed/glog"
 	"github.com/chrislusf/seaweedfs/weed/storage"
 	"github.com/chrislusf/seaweedfs/weed/storage/needle"
+	"github.com/chrislusf/seaweedfs/weed/storage/needle_map"
+	"github.com/chrislusf/seaweedfs/weed/storage/super_block"
 	"github.com/chrislusf/seaweedfs/weed/storage/types"
 )

@@ -31,11 +33,11 @@ var (

 type VolumeFileScanner4Fix struct {
 	version needle.Version
-	nm      *storage.NeedleMap
+	nm      *needle_map.MemDb
 }

-func (scanner *VolumeFileScanner4Fix) VisitSuperBlock(superBlock storage.SuperBlock) error {
-	scanner.version = superBlock.Version()
+func (scanner *VolumeFileScanner4Fix) VisitSuperBlock(superBlock super_block.SuperBlock) error {
+	scanner.version = superBlock.Version
 	return nil

 }

@@ -46,11 +48,11 @@ func (scanner *VolumeFileScanner4Fix) ReadNeedleBody() bool {
 func (scanner *VolumeFileScanner4Fix) VisitNeedle(n *needle.Needle, offset int64, needleHeader, needleBody []byte) error {
 	glog.V(2).Infof("key %d offset %d size %d disk_size %d gzip %v", n.Id, offset, n.Size, n.DiskSize(scanner.version), n.IsGzipped())
 	if n.Size > 0 && n.Size != types.TombstoneFileSize {
-		pe := scanner.nm.Put(n.Id, types.ToOffset(offset), n.Size)
+		pe := scanner.nm.Set(n.Id, types.ToOffset(offset), n.Size)
 		glog.V(2).Infof("saved %d with error %v", n.Size, pe)
 	} else {
 		glog.V(2).Infof("skipping deleted file ...")
-		return scanner.nm.Delete(n.Id, types.ToOffset(offset))
+		return scanner.nm.Delete(n.Id)
 	}
 	return nil
 }

@@ -66,23 +68,21 @@ func runFix(cmd *Command, args []string) bool {
 		baseFileName = *fixVolumeCollection + "_" + baseFileName
 	}
 	indexFileName := path.Join(*fixVolumePath, baseFileName+".idx")
-	indexFile, err := os.OpenFile(indexFileName, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0644)
-	if err != nil {
-		glog.Fatalf("Create Volume Index [ERROR] %s\n", err)
-	}
-	defer indexFile.Close()

-	nm := storage.NewBtreeNeedleMap(indexFile)
-	defer nm.Close()
+	nm := needle_map.NewMemDb()

 	vid := needle.VolumeId(*fixVolumeId)
 	scanner := &VolumeFileScanner4Fix{
 		nm: nm,
 	}

-	err = storage.ScanVolumeFile(*fixVolumePath, *fixVolumeCollection, vid, storage.NeedleMapInMemory, scanner)
-	if err != nil {
-		glog.Fatalf("Export Volume File [ERROR] %s\n", err)
+	if err := storage.ScanVolumeFile(*fixVolumePath, *fixVolumeCollection, vid, storage.NeedleMapInMemory, scanner); err != nil {
+		glog.Fatalf("scan .dat File: %v", err)
+		os.Remove(indexFileName)
+	}

+	if err := nm.SaveToIdx(indexFileName); err != nil {
+		glog.Fatalf("save to .idx File: %v", err)
 		os.Remove(indexFileName)
 	}

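The net effect of the fix-command change above: instead of opening the .idx file up front and writing a B-tree needle map into it while scanning, the scan now fills an in-memory needle map and flushes it to the index file in one pass at the end. A condensed sketch of that flow, using only the calls that appear in the diff (the wrapper function and the fmt import are added here purely for illustration):

// rebuildIndex scans a volume's .dat file into an in-memory needle map and
// then writes the .idx file in a single pass.
func rebuildIndex(dir, collection string, vid needle.VolumeId, indexFileName string) error {
	nm := needle_map.NewMemDb()

	scanner := &VolumeFileScanner4Fix{nm: nm}
	if err := storage.ScanVolumeFile(dir, collection, vid, storage.NeedleMapInMemory, scanner); err != nil {
		return fmt.Errorf("scan .dat file: %v", err)
	}

	// Live needles were recorded with nm.Set and deleted ones dropped with
	// nm.Delete inside VisitNeedle; now persist the map as the new index.
	return nm.SaveToIdx(indexFileName)
}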
@@ -8,14 +8,15 @@ import (
 	"strings"

 	"github.com/chrislusf/raft/protobuf"
+	"github.com/gorilla/mux"
+	"google.golang.org/grpc/reflection"

 	"github.com/chrislusf/seaweedfs/weed/glog"
 	"github.com/chrislusf/seaweedfs/weed/pb/master_pb"
 	"github.com/chrislusf/seaweedfs/weed/security"
 	"github.com/chrislusf/seaweedfs/weed/server"
+	"github.com/chrislusf/seaweedfs/weed/storage/backend"
 	"github.com/chrislusf/seaweedfs/weed/util"
-	"github.com/gorilla/mux"
-	"github.com/spf13/viper"
-	"google.golang.org/grpc/reflection"
 )

 var (

@@ -101,6 +102,8 @@ func runMaster(cmd *Command, args []string) bool {

 func startMaster(masterOption MasterOptions, masterWhiteList []string) {

+	backend.LoadConfiguration(util.GetViper())
+
 	myMasterAddress, peers := checkPeers(*masterOption.ip, *masterOption.port, *masterOption.peers)

 	r := mux.NewRouter()

@@ -112,7 +115,7 @@ func startMaster(masterOption MasterOptions, masterWhiteList []string) {
 		glog.Fatalf("Master startup error: %v", e)
 	}
 	// start raftServer
-	raftServer := weed_server.NewRaftServer(security.LoadClientTLS(viper.Sub("grpc"), "master"),
+	raftServer := weed_server.NewRaftServer(security.LoadClientTLS(util.GetViper(), "grpc.master"),
 		peers, myMasterAddress, *masterOption.metaFolder, ms.Topo, *masterOption.pulseSeconds)
 	if raftServer == nil {
 		glog.Fatalf("please verify %s is writable, see https://github.com/chrislusf/seaweedfs/issues/717", *masterOption.metaFolder)

@@ -126,7 +129,7 @@ func startMaster(masterOption MasterOptions, masterWhiteList []string) {
 		glog.Fatalf("master failed to listen on grpc port %d: %v", grpcPort, err)
 	}
 	// Create your protocol servers.
-	grpcS := util.NewGrpcServer(security.LoadServerTLS(viper.Sub("grpc"), "master"))
+	grpcS := util.NewGrpcServer(security.LoadServerTLS(util.GetViper(), "grpc.master"))
 	master_pb.RegisterSeaweedServer(grpcS, ms)
 	protobuf.RegisterRaftServer(grpcS, raftServer)
 	reflection.Register(grpcS)
@@ -10,7 +10,7 @@ type MountOptions struct {
 	filer              *string
 	filerMountRootPath *string
 	dir                *string
-	dirListingLimit    *int
+	dirListCacheLimit  *int64
 	collection         *string
 	replication        *string
 	ttlSec             *int

@@ -31,7 +31,7 @@ func init() {
 	mountOptions.filer = cmdMount.Flag.String("filer", "localhost:8888", "weed filer location")
 	mountOptions.filerMountRootPath = cmdMount.Flag.String("filer.path", "/", "mount this remote path from filer server")
 	mountOptions.dir = cmdMount.Flag.String("dir", ".", "mount weed filer to this directory")
-	mountOptions.dirListingLimit = cmdMount.Flag.Int("dirListLimit", 100000, "limit directory listing size")
+	mountOptions.dirListCacheLimit = cmdMount.Flag.Int64("dirListCacheLimit", 1000000, "limit cache size to speed up directory long format listing")
 	mountOptions.collection = cmdMount.Flag.String("collection", "", "collection to create the files")
 	mountOptions.replication = cmdMount.Flag.String("replication", "", "replication(e.g. 000, 001) to create to files. If empty, let filer decide.")
 	mountOptions.ttlSec = cmdMount.Flag.Int("ttl", 0, "file ttl in seconds")

@@ -64,12 +64,12 @@ var cmdMount = &Command{
 func parseFilerGrpcAddress(filer string) (filerGrpcAddress string, err error) {
 	hostnameAndPort := strings.Split(filer, ":")
 	if len(hostnameAndPort) != 2 {
-		return "", fmt.Errorf("The filer should have hostname:port format: %v", hostnameAndPort)
+		return "", fmt.Errorf("filer should have hostname:port format: %v", hostnameAndPort)
 	}

 	filerPort, parseErr := strconv.ParseUint(hostnameAndPort[1], 10, 64)
 	if parseErr != nil {
-		return "", fmt.Errorf("The filer filer port parse error: %v", parseErr)
+		return "", fmt.Errorf("filer port parse error: %v", parseErr)
 	}

 	filerGrpcPort := int(filerPort) + 10000
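As the hunk above shows, the filer's gRPC endpoint is simply its HTTP port plus 10000. A standalone sketch of that mapping, mirroring the logic rather than calling the unexported helper:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

func main() {
	filer := "localhost:8888"
	parts := strings.Split(filer, ":")
	httpPort, _ := strconv.Atoi(parts[1])
	// The gRPC server listens on the HTTP port + 10000,
	// so localhost:8888 maps to localhost:18888.
	fmt.Printf("%s:%d\n", parts[0], httpPort+10000)
}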
@@ -7,3 +7,7 @@ import (
 func osSpecificMountOptions() []fuse.MountOption {
 	return []fuse.MountOption{}
 }
+
+func checkMountPointAvailable(dir string) bool {
+	return true
+}

@@ -7,3 +7,7 @@ import (
 func osSpecificMountOptions() []fuse.MountOption {
 	return []fuse.MountOption{}
 }
+
+func checkMountPointAvailable(dir string) bool {
+	return true
+}
@@ -1,11 +1,157 @@
 package command

 import (
+	"bufio"
+	"fmt"
+	"io"
+	"os"
+	"strings"
+
 	"github.com/seaweedfs/fuse"
 )

+const (
+	/* 36 35 98:0 /mnt1 /mnt2 rw,noatime master:1 - ext3 /dev/root rw,errors=continue
+	   (1)(2)(3)   (4)   (5)      (6)      (7)   (8) (9)   (10)         (11)
+
+	   (1) mount ID:  unique identifier of the mount (may be reused after umount)
+	   (2) parent ID:  ID of parent (or of self for the top of the mount tree)
+	   (3) major:minor:  value of st_dev for files on filesystem
+	   (4) root:  root of the mount within the filesystem
+	   (5) mount point:  mount point relative to the process's root
+	   (6) mount options:  per mount options
+	   (7) optional fields:  zero or more fields of the form "tag[:value]"
+	   (8) separator:  marks the end of the optional fields
+	   (9) filesystem type:  name of filesystem of the form "type[.subtype]"
+	   (10) mount source:  filesystem specific information or "none"
+	   (11) super options:  per super block options*/
+	mountinfoFormat = "%d %d %d:%d %s %s %s %s"
+)
+
+// Info reveals information about a particular mounted filesystem. This
+// struct is populated from the content in the /proc/<pid>/mountinfo file.
+type Info struct {
+	// ID is a unique identifier of the mount (may be reused after umount).
+	ID int
+
+	// Parent indicates the ID of the mount parent (or of self for the top of the
+	// mount tree).
+	Parent int
+
+	// Major indicates one half of the device ID which identifies the device class.
+	Major int
+
+	// Minor indicates one half of the device ID which identifies a specific
+	// instance of device.
+	Minor int
+
+	// Root of the mount within the filesystem.
+	Root string
+
+	// Mountpoint indicates the mount point relative to the process's root.
+	Mountpoint string
+
+	// Opts represents mount-specific options.
+	Opts string
+
+	// Optional represents optional fields.
+	Optional string
+
+	// Fstype indicates the type of filesystem, such as EXT3.
+	Fstype string
+
+	// Source indicates filesystem specific information or "none".
+	Source string
+
+	// VfsOpts represents per super block options.
+	VfsOpts string
+}
+
+// Mounted determines if a specified mountpoint has been mounted.
+// On Linux it looks at /proc/self/mountinfo and on Solaris at mnttab.
+func mounted(mountPoint string) (bool, error) {
+	entries, err := parseMountTable()
+	if err != nil {
+		return false, err
+	}
+
+	// Search the table for the mountPoint
+	for _, e := range entries {
+		if e.Mountpoint == mountPoint {
+			return true, nil
+		}
+	}
+	return false, nil
+}
+
+// Parse /proc/self/mountinfo because comparing Dev and ino does not work from
+// bind mounts
+func parseMountTable() ([]*Info, error) {
+	f, err := os.Open("/proc/self/mountinfo")
+	if err != nil {
+		return nil, err
+	}
+	defer f.Close()
+
+	return parseInfoFile(f)
+}
+
+func parseInfoFile(r io.Reader) ([]*Info, error) {
+	var (
+		s   = bufio.NewScanner(r)
+		out []*Info
+	)
+
+	for s.Scan() {
+		if err := s.Err(); err != nil {
+			return nil, err
+		}
+
+		var (
+			p              = &Info{}
+			text           = s.Text()
+			optionalFields string
+		)
+
+		if _, err := fmt.Sscanf(text, mountinfoFormat,
+			&p.ID, &p.Parent, &p.Major, &p.Minor,
+			&p.Root, &p.Mountpoint, &p.Opts, &optionalFields); err != nil {
+			return nil, fmt.Errorf("Scanning '%s' failed: %s", text, err)
+		}
+		// Safe as mountinfo encodes mountpoints with spaces as \040.
+		index := strings.Index(text, " - ")
+		postSeparatorFields := strings.Fields(text[index+3:])
+		if len(postSeparatorFields) < 3 {
+			return nil, fmt.Errorf("Error found less than 3 fields post '-' in %q", text)
+		}
+
+		if optionalFields != "-" {
+			p.Optional = optionalFields
+		}
+
+		p.Fstype = postSeparatorFields[0]
+		p.Source = postSeparatorFields[1]
+		p.VfsOpts = strings.Join(postSeparatorFields[2:], " ")
+		out = append(out, p)
+	}
+	return out, nil
+}
+
 func osSpecificMountOptions() []fuse.MountOption {
 	return []fuse.MountOption{
 		fuse.AllowNonEmptyMount(),
 	}
 }
+
+func checkMountPointAvailable(dir string) bool {
+	mountPoint := dir
+	if mountPoint != "/" && strings.HasSuffix(mountPoint, "/") {
+		mountPoint = mountPoint[0 : len(mountPoint)-1]
+	}
+
+	if mounted, err := mounted(mountPoint); err != nil || mounted {
+		return false
+	}
+
+	return true
+}
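A small usage sketch of the mountinfo parser added above, feeding it the sample line quoted in the format comment. This is test-style code assuming the strings and testing imports; the expected field values follow directly from that sample line:

func TestParseInfoFileSample(t *testing.T) {
	sample := "36 35 98:0 /mnt1 /mnt2 rw,noatime master:1 - ext3 /dev/root rw,errors=continue\n"

	infos, err := parseInfoFile(strings.NewReader(sample))
	if err != nil {
		t.Fatal(err)
	}
	// /mnt2 is the mount point and ext3 the filesystem type in the sample line.
	if infos[0].Mountpoint != "/mnt2" || infos[0].Fstype != "ext3" {
		t.Fatalf("unexpected parse result: %+v", infos[0])
	}
}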
@@ -12,12 +12,11 @@ import (
 	"strings"
 	"time"

-	"github.com/chrislusf/seaweedfs/weed/security"
 	"github.com/jacobsa/daemonize"
-	"github.com/spf13/viper"

 	"github.com/chrislusf/seaweedfs/weed/filesys"
 	"github.com/chrislusf/seaweedfs/weed/glog"
+	"github.com/chrislusf/seaweedfs/weed/security"
 	"github.com/chrislusf/seaweedfs/weed/util"
 	"github.com/seaweedfs/fuse"
 	"github.com/seaweedfs/fuse/fs"

@@ -43,13 +42,13 @@ func runMount(cmd *Command, args []string) bool {
 		*mountOptions.chunkSizeLimitMB,
 		*mountOptions.allowOthers,
 		*mountOptions.ttlSec,
-		*mountOptions.dirListingLimit,
+		*mountOptions.dirListCacheLimit,
 		os.FileMode(umask),
 	)
 }

 func RunMount(filer, filerMountRootPath, dir, collection, replication, dataCenter string, chunkSizeLimitMB int,
-	allowOthers bool, ttlSec int, dirListingLimit int, umask os.FileMode) bool {
+	allowOthers bool, ttlSec int, dirListCacheLimit int64, umask os.FileMode) bool {

 	util.LoadConfiguration("security", false)

@@ -88,12 +87,18 @@ func RunMount(filer, filerMountRootPath, dir, collection, replication, dataCente
 		}
 	}

+	// Ensure target mount point availability
+	if isValid := checkMountPointAvailable(dir); !isValid {
+		glog.Fatalf("Expected mount to still be active, target mount point: %s, please check!", dir)
+		return false
+	}
+
 	mountName := path.Base(dir)

 	options := []fuse.MountOption{
 		fuse.VolumeName(mountName),
-		fuse.FSName("SeaweedFS"),
-		fuse.Subtype("SeaweedFS"),
+		fuse.FSName(filer + ":" + filerMountRootPath),
+		fuse.Subtype("seaweedfs"),
 		fuse.NoAppleDouble(),
 		fuse.NoAppleXattr(),
 		fuse.NoBrowse(),

@@ -116,9 +121,9 @@ func RunMount(filer, filerMountRootPath, dir, collection, replication, dataCente

 	c, err := fuse.Mount(dir, options...)
 	if err != nil {
-		glog.Fatal(err)
+		glog.V(0).Infof("mount: %v", err)
 		daemonize.SignalOutcome(err)
-		return false
+		return true
 	}

 	util.OnInterrupt(func() {

@@ -128,9 +133,9 @@ func RunMount(filer, filerMountRootPath, dir, collection, replication, dataCente

 	filerGrpcAddress, err := parseFilerGrpcAddress(filer)
 	if err != nil {
-		glog.Fatal(err)
+		glog.V(0).Infof("parseFilerGrpcAddress: %v", err)
 		daemonize.SignalOutcome(err)
-		return false
+		return true
 	}

 	mountRoot := filerMountRootPath

@@ -142,14 +147,14 @@ func RunMount(filer, filerMountRootPath, dir, collection, replication, dataCente

 	err = fs.Serve(c, filesys.NewSeaweedFileSystem(&filesys.Option{
 		FilerGrpcAddress:   filerGrpcAddress,
-		GrpcDialOption:     security.LoadClientTLS(viper.Sub("grpc"), "client"),
+		GrpcDialOption:     security.LoadClientTLS(util.GetViper(), "grpc.client"),
 		FilerMountRootPath: mountRoot,
 		Collection:         collection,
 		Replication:        replication,
 		TtlSec:             int32(ttlSec),
 		ChunkSizeLimit:     int64(chunkSizeLimitMB) * 1024 * 1024,
 		DataCenter:         dataCenter,
-		DirListingLimit:    dirListingLimit,
+		DirListCacheLimit:  dirListCacheLimit,
 		EntryCacheTtl:      3 * time.Second,
 		MountUid:           uid,
 		MountGid:           gid,

@@ -165,8 +170,9 @@ func RunMount(filer, filerMountRootPath, dir, collection, replication, dataCente
 	// check if the mount process has an error to report
 	<-c.Ready
 	if err := c.MountError; err != nil {
-		glog.Fatal(err)
+		glog.V(0).Infof("mount process: %v", err)
 		daemonize.SignalOutcome(err)
+		return true
 	}

 	return true
|
|||||||
package command
|
package command
|
||||||
|
|
||||||
import (
|
import (
|
||||||
|
"fmt"
|
||||||
"net/http"
|
"net/http"
|
||||||
"time"
|
"time"
|
||||||
|
|
||||||
"github.com/chrislusf/seaweedfs/weed/security"
|
"github.com/chrislusf/seaweedfs/weed/security"
|
||||||
"github.com/spf13/viper"
|
|
||||||
|
|
||||||
"fmt"
|
"github.com/gorilla/mux"
|
||||||
|
|
||||||
"github.com/chrislusf/seaweedfs/weed/glog"
|
"github.com/chrislusf/seaweedfs/weed/glog"
|
||||||
"github.com/chrislusf/seaweedfs/weed/s3api"
|
"github.com/chrislusf/seaweedfs/weed/s3api"
|
||||||
"github.com/chrislusf/seaweedfs/weed/util"
|
"github.com/chrislusf/seaweedfs/weed/util"
|
||||||
"github.com/gorilla/mux"
|
|
||||||
)
|
)
|
||||||
|
|
||||||
var (
|
var (
|
||||||
@ -69,7 +68,7 @@ func (s3opt *S3Options) startS3Server() bool {
|
|||||||
FilerGrpcAddress: filerGrpcAddress,
|
FilerGrpcAddress: filerGrpcAddress,
|
||||||
DomainName: *s3opt.domainName,
|
DomainName: *s3opt.domainName,
|
||||||
BucketsPath: *s3opt.filerBucketsPath,
|
BucketsPath: *s3opt.filerBucketsPath,
|
||||||
GrpcDialOption: security.LoadClientTLS(viper.Sub("grpc"), "client"),
|
GrpcDialOption: security.LoadClientTLS(util.GetViper(), "grpc.client"),
|
||||||
})
|
})
|
||||||
if s3ApiServer_err != nil {
|
if s3ApiServer_err != nil {
|
||||||
glog.Fatalf("S3 API Server startup error: %v", s3ApiServer_err)
|
glog.Fatalf("S3 API Server startup error: %v", s3ApiServer_err)
|
||||||
|
@@ -14,6 +14,14 @@ var cmdScaffold = &Command{
 	Short: "generate basic configuration files",
 	Long: `Generate filer.toml with all possible configurations for you to customize.

+	The options can also be overwritten by environment variables.
+	For example, the filer.toml mysql password can be overwritten by environment variable
+		export WEED_MYSQL_PASSWORD=some_password
+	Environment variable rules:
+		* Prefix fix with "WEED_"
+		* Upppercase the reset of variable name.
+		* Replace '.' with '_'
+
 	`,
 }

|
|||||||
# $HOME/.seaweedfs/filer.toml
|
# $HOME/.seaweedfs/filer.toml
|
||||||
# /etc/seaweedfs/filer.toml
|
# /etc/seaweedfs/filer.toml
|
||||||
|
|
||||||
[memory]
|
####################################################
|
||||||
# local in memory, mostly for testing purpose
|
# Customizable filer server options
|
||||||
enabled = false
|
####################################################
|
||||||
|
[filer.options]
|
||||||
|
# with http DELETE, by default the filer would check whether a folder is empty.
|
||||||
|
# recursive_delete will delete all sub folders and files, similar to "rm -Rf"
|
||||||
|
recursive_delete = false
|
||||||
|
|
||||||
[leveldb]
|
|
||||||
# local on disk, mostly for simple single-machine setup, fairly scalable
|
####################################################
|
||||||
enabled = false
|
# The following are filer store options
|
||||||
dir = "." # directory to store level db files
|
####################################################
|
||||||
|
|
||||||
[leveldb2]
|
[leveldb2]
|
||||||
# local on disk, mostly for simple single-machine setup, fairly scalable
|
# local on disk, mostly for simple single-machine setup, fairly scalable
|
||||||
@ -74,10 +86,6 @@ dir = "." # directory to store level db files
|
|||||||
enabled = true
|
enabled = true
|
||||||
dir = "." # directory to store level db files
|
dir = "." # directory to store level db files
|
||||||
|
|
||||||
####################################################
|
|
||||||
# multiple filers on shared storage, fairly scalable
|
|
||||||
####################################################
|
|
||||||
|
|
||||||
[mysql] # or tidb
|
[mysql] # or tidb
|
||||||
# CREATE TABLE IF NOT EXISTS filemeta (
|
# CREATE TABLE IF NOT EXISTS filemeta (
|
||||||
# dirhash BIGINT COMMENT 'first 64 bits of MD5 hash value of directory field',
|
# dirhash BIGINT COMMENT 'first 64 bits of MD5 hash value of directory field',
|
||||||
@ -95,6 +103,7 @@ password = ""
|
|||||||
database = "" # create or use an existing database
|
database = "" # create or use an existing database
|
||||||
connection_max_idle = 2
|
connection_max_idle = 2
|
||||||
connection_max_open = 100
|
connection_max_open = 100
|
||||||
|
interpolateParams = false
|
||||||
|
|
||||||
[postgres] # or cockroachdb
|
[postgres] # or cockroachdb
|
||||||
# CREATE TABLE IF NOT EXISTS filemeta (
|
# CREATE TABLE IF NOT EXISTS filemeta (
|
||||||
@ -144,6 +153,10 @@ addresses = [
|
|||||||
"localhost:30006",
|
"localhost:30006",
|
||||||
]
|
]
|
||||||
password = ""
|
password = ""
|
||||||
|
# allows reads from slave servers or the master, but all writes still go to the master
|
||||||
|
readOnly = true
|
||||||
|
# automatically use the closest Redis server for reads
|
||||||
|
routeByLatency = true
|
||||||
|
|
||||||
[etcd]
|
[etcd]
|
||||||
enabled = false
|
enabled = false
|
||||||
@ -346,5 +359,32 @@ scripts = """
|
|||||||
"""
|
"""
|
||||||
sleep_minutes = 17 # sleep minutes between each script execution
|
sleep_minutes = 17 # sleep minutes between each script execution
|
||||||
|
|
||||||
|
[master.filer]
|
||||||
|
default_filer_url = "http://localhost:8888/"
|
||||||
|
|
||||||
|
[master.sequencer]
|
||||||
|
type = "memory" # Choose [memory|etcd] type for storing the file id sequence
|
||||||
|
# when sequencer.type = etcd, set listen client urls of etcd cluster that store file id sequence
|
||||||
|
# example : http://127.0.0.1:2379,http://127.0.0.1:2389
|
||||||
|
sequencer_etcd_urls = "http://127.0.0.1:2379"
|
||||||
|
|
||||||
|
|
||||||
|
# configurations for tiered cloud storage
|
||||||
|
# old volumes are transparently moved to cloud for cost efficiency
|
||||||
|
[storage.backend]
|
||||||
|
[storage.backend.s3.default]
|
||||||
|
enabled = false
|
||||||
|
aws_access_key_id = "" # if empty, loads from the shared credentials file (~/.aws/credentials).
|
||||||
|
aws_secret_access_key = "" # if empty, loads from the shared credentials file (~/.aws/credentials).
|
||||||
|
region = "us-east-2"
|
||||||
|
bucket = "your_bucket_name" # an existing bucket
|
||||||
|
|
||||||
|
# create this number of logical volumes if no more writable volumes
|
||||||
|
[master.volume_growth]
|
||||||
|
count_1 = 7 # create 1 x 7 = 7 actual volumes
|
||||||
|
count_2 = 6 # create 2 x 6 = 12 actual volumes
|
||||||
|
count_3 = 3 # create 3 x 3 = 9 actual volumes
|
||||||
|
count_other = 1 # create n x 1 = n actual volumes
|
||||||
|
|
||||||
`
|
`
|
||||||
)
|
)
|
||||||
|
44 weed/command/scaffold_test.go (new file)

@@ -0,0 +1,44 @@
+package command
+
+import (
+	"bytes"
+	"fmt"
+	"testing"
+
+	"github.com/spf13/viper"
+)
+
+func TestReadingTomlConfiguration(t *testing.T) {
+
+	viper.SetConfigType("toml")
+
+	// any approach to require this configuration into your program.
+	var tomlExample = []byte(`
+[database]
+server = "192.168.1.1"
+ports = [ 8001, 8001, 8002 ]
+connection_max = 5000
+enabled = true
+
+[servers]
+
+  # You can indent as you please. Tabs or spaces. TOML don't care.
+  [servers.alpha]
+  ip = "10.0.0.1"
+  dc = "eqdc10"
+
+  [servers.beta]
+  ip = "10.0.0.2"
+  dc = "eqdc10"
+
+`)
+
+	viper.ReadConfig(bytes.NewBuffer(tomlExample))
+
+	fmt.Printf("database is %v\n", viper.Get("database"))
+	fmt.Printf("servers is %v\n", viper.GetStringMap("servers"))
+
+	alpha := viper.Sub("servers.alpha")
+
+	fmt.Printf("alpha ip is %v\n", alpha.GetString("ip"))
+}
@@ -89,6 +89,7 @@ func init() {
 	serverOptions.v.fixJpgOrientation = cmdServer.Flag.Bool("volume.images.fix.orientation", false, "Adjust jpg orientation when uploading.")
 	serverOptions.v.readRedirect = cmdServer.Flag.Bool("volume.read.redirect", true, "Redirect moved or non-local volumes.")
 	serverOptions.v.compactionMBPerSecond = cmdServer.Flag.Int("volume.compactionMBps", 0, "limit compaction speed in mega bytes per second")
+	serverOptions.v.fileSizeLimitMB = cmdServer.Flag.Int("volume.fileSizeLimitMB", 256, "limit file size to avoid out of memory")
 	serverOptions.v.publicUrl = cmdServer.Flag.String("volume.publicUrl", "", "publicly accessible address")

 	s3Options.filerBucketsPath = cmdServer.Flag.String("s3.filer.dir.buckets", "/buckets", "folder on filer to store all buckets")
@@ -2,14 +2,10 @@ package command

 import (
 	"fmt"
-	"net/url"
-	"strconv"
-	"strings"

 	"github.com/chrislusf/seaweedfs/weed/security"
 	"github.com/chrislusf/seaweedfs/weed/shell"
 	"github.com/chrislusf/seaweedfs/weed/util"
-	"github.com/spf13/viper"
 )

 var (

@@ -34,10 +30,10 @@ var cmdShell = &Command{
 func runShell(command *Command, args []string) bool {

 	util.LoadConfiguration("security", false)
-	shellOptions.GrpcDialOption = security.LoadClientTLS(viper.Sub("grpc"), "client")
+	shellOptions.GrpcDialOption = security.LoadClientTLS(util.GetViper(), "grpc.client")

 	var filerPwdErr error
-	shellOptions.FilerHost, shellOptions.FilerPort, shellOptions.Directory, filerPwdErr = parseFilerUrl(*shellInitialFilerUrl)
+	shellOptions.FilerHost, shellOptions.FilerPort, shellOptions.Directory, filerPwdErr = util.ParseFilerUrl(*shellInitialFilerUrl)
 	if filerPwdErr != nil {
 		fmt.Printf("failed to parse url filer.url=%s : %v\n", *shellInitialFilerUrl, filerPwdErr)
 		return false

@@ -48,22 +44,3 @@ func runShell(command *Command, args []string) bool {
 	return true

 }

-func parseFilerUrl(entryPath string) (filerServer string, filerPort int64, path string, err error) {
-	if !strings.HasPrefix(entryPath, "http://") && !strings.HasPrefix(entryPath, "https://") {
-		entryPath = "http://" + entryPath
-	}
-
-	var u *url.URL
-	u, err = url.Parse(entryPath)
-	if err != nil {
-		return
-	}
-	filerServer = u.Hostname()
-	portString := u.Port()
-	if portString != "" {
-		filerPort, err = strconv.ParseInt(portString, 10, 32)
-	}
-	path = u.Path
-	return
-}
@@ -6,11 +6,9 @@ import (
 	"os"
 	"path/filepath"

+	"github.com/chrislusf/seaweedfs/weed/operation"
 	"github.com/chrislusf/seaweedfs/weed/security"
 	"github.com/chrislusf/seaweedfs/weed/util"
-	"github.com/spf13/viper"
-
-	"github.com/chrislusf/seaweedfs/weed/operation"
 )

 var (

@@ -63,7 +61,7 @@ var cmdUpload = &Command{
 func runUpload(cmd *Command, args []string) bool {

 	util.LoadConfiguration("security", false)
-	grpcDialOption := security.LoadClientTLS(viper.Sub("grpc"), "client")
+	grpcDialOption := security.LoadClientTLS(util.GetViper(), "grpc.client")

 	if len(args) == 0 {
 		if *upload.dir == "" {
@ -1,6 +1,7 @@
|
|||||||
package command
|
package command
|
||||||
|
|
||||||
import (
|
import (
|
||||||
|
"fmt"
|
||||||
"net/http"
|
"net/http"
|
||||||
"os"
|
"os"
|
||||||
"runtime"
|
"runtime"
|
||||||
@ -9,15 +10,19 @@ import (
|
|||||||
"strings"
|
"strings"
|
||||||
"time"
|
"time"
|
||||||
|
|
||||||
"github.com/chrislusf/seaweedfs/weed/security"
|
|
||||||
"github.com/spf13/viper"
|
"github.com/spf13/viper"
|
||||||
|
"google.golang.org/grpc"
|
||||||
|
|
||||||
|
"github.com/chrislusf/seaweedfs/weed/security"
|
||||||
|
"github.com/chrislusf/seaweedfs/weed/util/httpdown"
|
||||||
|
|
||||||
|
"google.golang.org/grpc/reflection"
|
||||||
|
|
||||||
"github.com/chrislusf/seaweedfs/weed/glog"
|
"github.com/chrislusf/seaweedfs/weed/glog"
|
||||||
"github.com/chrislusf/seaweedfs/weed/pb/volume_server_pb"
|
"github.com/chrislusf/seaweedfs/weed/pb/volume_server_pb"
|
||||||
"github.com/chrislusf/seaweedfs/weed/server"
|
"github.com/chrislusf/seaweedfs/weed/server"
|
||||||
"github.com/chrislusf/seaweedfs/weed/storage"
|
"github.com/chrislusf/seaweedfs/weed/storage"
|
||||||
"github.com/chrislusf/seaweedfs/weed/util"
|
"github.com/chrislusf/seaweedfs/weed/util"
|
||||||
"google.golang.org/grpc/reflection"
|
|
||||||
)
|
)
|
||||||
|
|
||||||
var (
|
var (
|
||||||
@ -44,6 +49,7 @@ type VolumeServerOptions struct {
|
|||||||
cpuProfile *string
|
cpuProfile *string
|
||||||
memProfile *string
|
memProfile *string
|
||||||
compactionMBPerSecond *int
|
compactionMBPerSecond *int
|
||||||
|
fileSizeLimitMB *int
|
||||||
}
|
}
|
||||||
|
|
||||||
func init() {
|
func init() {
|
||||||
@ -64,6 +70,7 @@ func init() {
|
|||||||
v.cpuProfile = cmdVolume.Flag.String("cpuprofile", "", "cpu profile output file")
|
v.cpuProfile = cmdVolume.Flag.String("cpuprofile", "", "cpu profile output file")
|
||||||
v.memProfile = cmdVolume.Flag.String("memprofile", "", "memory profile output file")
|
v.memProfile = cmdVolume.Flag.String("memprofile", "", "memory profile output file")
|
||||||
v.compactionMBPerSecond = cmdVolume.Flag.Int("compactionMBps", 0, "limit background compaction or copying speed in mega bytes per second")
|
v.compactionMBPerSecond = cmdVolume.Flag.Int("compactionMBps", 0, "limit background compaction or copying speed in mega bytes per second")
|
||||||
|
v.fileSizeLimitMB = cmdVolume.Flag.Int("fileSizeLimitMB", 256, "limit file size to avoid out of memory")
|
||||||
}
|
}
|
||||||
|
|
||||||
var cmdVolume = &Command{
|
var cmdVolume = &Command{
|
||||||
@ -94,7 +101,7 @@ func runVolume(cmd *Command, args []string) bool {
|
|||||||
|
|
||||||
func (v VolumeServerOptions) startVolumeServer(volumeFolders, maxVolumeCounts, volumeWhiteListOption string) {
|
func (v VolumeServerOptions) startVolumeServer(volumeFolders, maxVolumeCounts, volumeWhiteListOption string) {
|
||||||
|
|
||||||
//Set multiple folders and each folder's max volume count limit'
|
// Set multiple folders and each folder's max volume count limit'
|
||||||
v.folders = strings.Split(volumeFolders, ",")
|
v.folders = strings.Split(volumeFolders, ",")
|
||||||
maxCountStrings := strings.Split(maxVolumeCounts, ",")
|
maxCountStrings := strings.Split(maxVolumeCounts, ",")
|
||||||
for _, maxString := range maxCountStrings {
|
for _, maxString := range maxCountStrings {
|
||||||
@ -113,7 +120,7 @@ func (v VolumeServerOptions) startVolumeServer(volumeFolders, maxVolumeCounts, v
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
//security related white list configuration
|
// security related white list configuration
|
||||||
if volumeWhiteListOption != "" {
|
if volumeWhiteListOption != "" {
|
||||||
v.whiteList = strings.Split(volumeWhiteListOption, ",")
|
v.whiteList = strings.Split(volumeWhiteListOption, ",")
|
||||||
}
|
}
|
||||||
@ -128,11 +135,10 @@ func (v VolumeServerOptions) startVolumeServer(volumeFolders, maxVolumeCounts, v
|
|||||||
if *v.publicUrl == "" {
|
if *v.publicUrl == "" {
|
||||||
*v.publicUrl = *v.ip + ":" + strconv.Itoa(*v.publicPort)
|
*v.publicUrl = *v.ip + ":" + strconv.Itoa(*v.publicPort)
|
||||||
}
|
}
|
||||||
isSeperatedPublicPort := *v.publicPort != *v.port
|
|
||||||
|
|
||||||
volumeMux := http.NewServeMux()
|
volumeMux := http.NewServeMux()
|
||||||
publicVolumeMux := volumeMux
|
publicVolumeMux := volumeMux
|
||||||
if isSeperatedPublicPort {
|
if v.isSeparatedPublicPort() {
|
||||||
publicVolumeMux = http.NewServeMux()
|
publicVolumeMux = http.NewServeMux()
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -156,53 +162,134 @@ func (v VolumeServerOptions) startVolumeServer(volumeFolders, maxVolumeCounts, v
v.whiteList,
*v.fixJpgOrientation, *v.readRedirect,
*v.compactionMBPerSecond,
+*v.fileSizeLimitMB,
)

+// starting grpc server
+grpcS := v.startGrpcService(volumeServer)
+
+// starting public http server
+var publicHttpDown httpdown.Server
+if v.isSeparatedPublicPort() {
+publicHttpDown = v.startPublicHttpService(publicVolumeMux)
+if nil == publicHttpDown {
+glog.Fatalf("start public http service failed")
+}
+}
+
+// starting the cluster http server
+clusterHttpServer := v.startClusterHttpService(volumeMux)
+
+stopChain := make(chan struct{})
+util.OnInterrupt(func() {
+fmt.Println("volume server has be killed")
+var startTime time.Time
+
+// firstly, stop the public http service to prevent from receiving new user request
+if nil != publicHttpDown {
+startTime = time.Now()
+if err := publicHttpDown.Stop(); err != nil {
+glog.Warningf("stop the public http server failed, %v", err)
+}
+delta := time.Now().Sub(startTime).Nanoseconds() / 1e6
+glog.V(0).Infof("stop public http server, elapsed %dms", delta)
+}
+
+startTime = time.Now()
+if err := clusterHttpServer.Stop(); err != nil {
+glog.Warningf("stop the cluster http server failed, %v", err)
+}
+delta := time.Now().Sub(startTime).Nanoseconds() / 1e6
+glog.V(0).Infof("graceful stop cluster http server, elapsed [%d]", delta)
+
+startTime = time.Now()
+grpcS.GracefulStop()
+delta = time.Now().Sub(startTime).Nanoseconds() / 1e6
+glog.V(0).Infof("graceful stop gRPC, elapsed [%d]", delta)
+
+startTime = time.Now()
+volumeServer.Shutdown()
+delta = time.Now().Sub(startTime).Nanoseconds() / 1e6
+glog.V(0).Infof("stop volume server, elapsed [%d]", delta)
+
+pprof.StopCPUProfile()
+
+close(stopChain) // notify exit
+})
+
+select {
+case <-stopChain:
+}
+glog.Warningf("the volume server exit.")
+}
+
+// check whether configure the public port
+func (v VolumeServerOptions) isSeparatedPublicPort() bool {
+return *v.publicPort != *v.port
+}
+
+func (v VolumeServerOptions) startGrpcService(vs volume_server_pb.VolumeServerServer) *grpc.Server {
+grpcPort := *v.port + 10000
+grpcL, err := util.NewListener(*v.bindIp+":"+strconv.Itoa(grpcPort), 0)
+if err != nil {
+glog.Fatalf("failed to listen on grpc port %d: %v", grpcPort, err)
+}
+grpcS := util.NewGrpcServer(security.LoadServerTLS(util.GetViper(), "grpc.volume"))
+volume_server_pb.RegisterVolumeServerServer(grpcS, vs)
+reflection.Register(grpcS)
+go func() {
+if err := grpcS.Serve(grpcL); err != nil {
+glog.Fatalf("start gRPC service failed, %s", err)
+}
+}()
+return grpcS
+}
+
+func (v VolumeServerOptions) startPublicHttpService(handler http.Handler) httpdown.Server {
+publicListeningAddress := *v.bindIp + ":" + strconv.Itoa(*v.publicPort)
+glog.V(0).Infoln("Start Seaweed volume server", util.VERSION, "public at", publicListeningAddress)
+publicListener, e := util.NewListener(publicListeningAddress, time.Duration(*v.idleConnectionTimeout)*time.Second)
+if e != nil {
+glog.Fatalf("Volume server listener error:%v", e)
+}
+
+pubHttp := httpdown.HTTP{StopTimeout: 5 * time.Minute, KillTimeout: 5 * time.Minute}
+publicHttpDown := pubHttp.Serve(&http.Server{Handler: handler}, publicListener)
+go func() {
+if err := publicHttpDown.Wait(); err != nil {
+glog.Errorf("public http down wait failed, %v", err)
+}
+}()
+
+return publicHttpDown
+}
+
+func (v VolumeServerOptions) startClusterHttpService(handler http.Handler) httpdown.Server {
+var (
+certFile, keyFile string
+)
+if viper.GetString("https.volume.key") != "" {
+certFile = viper.GetString("https.volume.cert")
+keyFile = viper.GetString("https.volume.key")
+}
+
listeningAddress := *v.bindIp + ":" + strconv.Itoa(*v.port)
glog.V(0).Infof("Start Seaweed volume server %s at %s", util.VERSION, listeningAddress)
listener, e := util.NewListener(listeningAddress, time.Duration(*v.idleConnectionTimeout)*time.Second)
if e != nil {
glog.Fatalf("Volume server listener error:%v", e)
}
-if isSeperatedPublicPort {
-publicListeningAddress := *v.bindIp + ":" + strconv.Itoa(*v.publicPort)
-glog.V(0).Infoln("Start Seaweed volume server", util.VERSION, "public at", publicListeningAddress)
-publicListener, e := util.NewListener(publicListeningAddress, time.Duration(*v.idleConnectionTimeout)*time.Second)
-if e != nil {
-glog.Fatalf("Volume server listener error:%v", e)
-}
-go func() {
-if e := http.Serve(publicListener, publicVolumeMux); e != nil {
-glog.Fatalf("Volume server fail to serve public: %v", e)
-}
-}()
-}

-util.OnInterrupt(func() {
-volumeServer.Shutdown()
-pprof.StopCPUProfile()
-})
+httpDown := httpdown.HTTP{
+KillTimeout: 5 * time.Minute,
+StopTimeout: 5 * time.Minute,
+CertFile: certFile,
+KeyFile: keyFile}

-// starting grpc server
-grpcPort := *v.port + 10000
-grpcL, err := util.NewListener(*v.bindIp+":"+strconv.Itoa(grpcPort), 0)
-if err != nil {
-glog.Fatalf("failed to listen on grpc port %d: %v", grpcPort, err)
-}
-grpcS := util.NewGrpcServer(security.LoadServerTLS(viper.Sub("grpc"), "volume"))
-volume_server_pb.RegisterVolumeServerServer(grpcS, volumeServer)
-reflection.Register(grpcS)
-go grpcS.Serve(grpcL)
-
-if viper.GetString("https.volume.key") != "" {
-if e := http.ServeTLS(listener, volumeMux,
-viper.GetString("https.volume.cert"), viper.GetString("https.volume.key")); e != nil {
+clusterHttpServer := httpDown.Serve(&http.Server{Handler: handler}, listener)
+go func() {
+if e := clusterHttpServer.Wait(); e != nil {
glog.Fatalf("Volume server fail to serve: %v", e)
}
-} else {
-if e := http.Serve(listener, volumeMux); e != nil {
-glog.Fatalf("Volume server fail to serve: %v", e)
-}
-}
+}()
+return clusterHttpServer

}
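
The rewritten startup and shutdown in volume.go above stops the public HTTP listener first, then the cluster HTTP server, then gRPC, and finally the volume server itself. The sketch below is not part of the commit and does not use the SeaweedFS httpdown wrapper; it only illustrates the same ordering with standard-library servers and a plain signal handler.

package main

import (
	"context"
	"log"
	"net"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"

	"google.golang.org/grpc"
)

// Analogous shutdown ordering with stdlib servers, for illustration only.
func main() {
	publicSrv := &http.Server{Addr: ":8081"}
	clusterSrv := &http.Server{Addr: ":8080"}
	grpcSrv := grpc.NewServer()

	go publicSrv.ListenAndServe()
	go clusterSrv.ListenAndServe()
	lis, err := net.Listen("tcp", ":18080")
	if err != nil {
		log.Fatal(err)
	}
	go grpcSrv.Serve(lis)

	stop := make(chan os.Signal, 1)
	signal.Notify(stop, syscall.SIGINT, syscall.SIGTERM)
	<-stop

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()
	publicSrv.Shutdown(ctx)  // 1. stop accepting new user requests
	clusterSrv.Shutdown(ctx) // 2. drain cluster-facing HTTP
	grpcSrv.GracefulStop()   // 3. finish in-flight gRPC calls
	// 4. finally flush volume state (volumeServer.Shutdown() in the real code)
}
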
@@ -11,7 +11,6 @@ import (
"github.com/chrislusf/seaweedfs/weed/security"
"github.com/chrislusf/seaweedfs/weed/server"
"github.com/chrislusf/seaweedfs/weed/util"
-"github.com/spf13/viper"
)

var (
@@ -75,7 +74,7 @@ func (wo *WebDavOption) startWebDav() bool {
ws, webdavServer_err := weed_server.NewWebDavServer(&weed_server.WebDavOption{
Filer: *wo.filer,
FilerGrpcAddress: filerGrpcAddress,
-GrpcDialOption: security.LoadClientTLS(viper.Sub("grpc"), "client"),
+GrpcDialOption: security.LoadClientTLS(util.GetViper(), "grpc.client"),
Collection: *wo.collection,
Uid: uid,
Gid: gid,
@@ -7,16 +7,18 @@ import (

"github.com/chrislusf/seaweedfs/weed/filer2"
"github.com/chrislusf/seaweedfs/weed/glog"
+"github.com/chrislusf/seaweedfs/weed/util"
)

type AbstractSqlStore struct {
DB *sql.DB
SqlInsert string
SqlUpdate string
SqlFind string
SqlDelete string
-SqlListExclusive string
-SqlListInclusive string
+SqlDeleteFolderChildren string
+SqlListExclusive string
+SqlListInclusive string
}

type TxOrDB interface {
@@ -64,7 +66,7 @@ func (store *AbstractSqlStore) InsertEntry(ctx context.Context, entry *filer2.En
return fmt.Errorf("encode %s: %s", entry.FullPath, err)
}

-res, err := store.getTxOrDB(ctx).ExecContext(ctx, store.SqlInsert, hashToLong(dir), name, dir, meta)
+res, err := store.getTxOrDB(ctx).ExecContext(ctx, store.SqlInsert, util.HashStringToLong(dir), name, dir, meta)
if err != nil {
return fmt.Errorf("insert %s: %s", entry.FullPath, err)
}
@@ -84,7 +86,7 @@ func (store *AbstractSqlStore) UpdateEntry(ctx context.Context, entry *filer2.En
return fmt.Errorf("encode %s: %s", entry.FullPath, err)
}

-res, err := store.getTxOrDB(ctx).ExecContext(ctx, store.SqlUpdate, meta, hashToLong(dir), name, dir)
+res, err := store.getTxOrDB(ctx).ExecContext(ctx, store.SqlUpdate, meta, util.HashStringToLong(dir), name, dir)
if err != nil {
return fmt.Errorf("update %s: %s", entry.FullPath, err)
}
@@ -99,7 +101,7 @@ func (store *AbstractSqlStore) UpdateEntry(ctx context.Context, entry *filer2.En
func (store *AbstractSqlStore) FindEntry(ctx context.Context, fullpath filer2.FullPath) (*filer2.Entry, error) {

dir, name := fullpath.DirAndName()
-row := store.getTxOrDB(ctx).QueryRowContext(ctx, store.SqlFind, hashToLong(dir), name, dir)
+row := store.getTxOrDB(ctx).QueryRowContext(ctx, store.SqlFind, util.HashStringToLong(dir), name, dir)
var data []byte
if err := row.Scan(&data); err != nil {
return nil, filer2.ErrNotFound
@@ -119,7 +121,7 @@ func (store *AbstractSqlStore) DeleteEntry(ctx context.Context, fullpath filer2.

dir, name := fullpath.DirAndName()

-res, err := store.getTxOrDB(ctx).ExecContext(ctx, store.SqlDelete, hashToLong(dir), name, dir)
+res, err := store.getTxOrDB(ctx).ExecContext(ctx, store.SqlDelete, util.HashStringToLong(dir), name, dir)
if err != nil {
return fmt.Errorf("delete %s: %s", fullpath, err)
}
@@ -132,6 +134,21 @@ func (store *AbstractSqlStore) DeleteEntry(ctx context.Context, fullpath filer2.
return nil
}

+func (store *AbstractSqlStore) DeleteFolderChildren(ctx context.Context, fullpath filer2.FullPath) error {
+
+res, err := store.getTxOrDB(ctx).ExecContext(ctx, store.SqlDeleteFolderChildren, util.HashStringToLong(string(fullpath)), fullpath)
+if err != nil {
+return fmt.Errorf("deleteFolderChildren %s: %s", fullpath, err)
+}
+
+_, err = res.RowsAffected()
+if err != nil {
+return fmt.Errorf("deleteFolderChildren %s but no rows affected: %s", fullpath, err)
+}
+
+return nil
+}
+
func (store *AbstractSqlStore) ListDirectoryEntries(ctx context.Context, fullpath filer2.FullPath, startFileName string, inclusive bool, limit int) (entries []*filer2.Entry, err error) {

sqlText := store.SqlListExclusive
@@ -139,7 +156,7 @@ func (store *AbstractSqlStore) ListDirectoryEntries(ctx context.Context, fullpat
sqlText = store.SqlListInclusive
}

-rows, err := store.getTxOrDB(ctx).QueryContext(ctx, sqlText, hashToLong(string(fullpath)), startFileName, string(fullpath), limit)
+rows, err := store.getTxOrDB(ctx).QueryContext(ctx, sqlText, util.HashStringToLong(string(fullpath)), startFileName, string(fullpath), limit)
if err != nil {
return nil, fmt.Errorf("list %s : %v", fullpath, err)
}
@@ -1,32 +0,0 @@
-package abstract_sql
-
-import (
-"crypto/md5"
-"io"
-)
-
-// returns a 64 bit big int
-func hashToLong(dir string) (v int64) {
-h := md5.New()
-io.WriteString(h, dir)
-
-b := h.Sum(nil)
-
-v += int64(b[0])
-v <<= 8
-v += int64(b[1])
-v <<= 8
-v += int64(b[2])
-v <<= 8
-v += int64(b[3])
-v <<= 8
-v += int64(b[4])
-v <<= 8
-v += int64(b[5])
-v <<= 8
-v += int64(b[6])
-v <<= 8
-v += int64(b[7])
-
-return
-}
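
The deleted hashToLong helper above is replaced by the shared util.HashStringToLong. Judging from the deleted code, the shared helper presumably performs the same folding: md5 the string and pack the first eight bytes into an int64. A standalone restatement, for illustration only and not the actual util implementation:

package main

import (
	"crypto/md5"
	"fmt"
	"io"
)

// Same folding as the deleted hashToLong: md5 the string and pack the first
// 8 bytes into an int64.
func hashStringToLong(dir string) (v int64) {
	h := md5.New()
	io.WriteString(h, dir)
	b := h.Sum(nil)
	for i := 0; i < 8; i++ {
		v = v<<8 + int64(b[i])
	}
	return
}

func main() {
	fmt.Println(hashStringToLong("/some/dir")) // stable 64-bit key for the directory column
}
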
@@ -22,10 +22,10 @@ func (store *CassandraStore) GetName() string {
return "cassandra"
}

-func (store *CassandraStore) Initialize(configuration util.Configuration) (err error) {
+func (store *CassandraStore) Initialize(configuration util.Configuration, prefix string) (err error) {
return store.initialize(
-configuration.GetString("keyspace"),
-configuration.GetStringSlice("hosts"),
+configuration.GetString(prefix+"keyspace"),
+configuration.GetStringSlice(prefix+"hosts"),
)
}

@@ -112,6 +112,17 @@ func (store *CassandraStore) DeleteEntry(ctx context.Context, fullpath filer2.Fu
return nil
}

+func (store *CassandraStore) DeleteFolderChildren(ctx context.Context, fullpath filer2.FullPath) error {
+
+if err := store.session.Query(
+"DELETE FROM filemeta WHERE directory=?",
+fullpath).Exec(); err != nil {
+return fmt.Errorf("delete %s : %v", fullpath, err)
+}
+
+return nil
+}
+
func (store *CassandraStore) ListDirectoryEntries(ctx context.Context, fullpath filer2.FullPath, startFileName string, inclusive bool,
limit int) (entries []*filer2.Entry, err error) {

@@ -17,8 +17,7 @@ func (f *Filer) LoadConfiguration(config *viper.Viper) {

for _, store := range Stores {
if config.GetBool(store.GetName() + ".enabled") {
-viperSub := config.Sub(store.GetName())
-if err := store.Initialize(viperSub); err != nil {
+if err := store.Initialize(config, store.GetName()+"."); err != nil {
glog.Fatalf("Failed to initialize store for %s: %+v",
store.GetName(), err)
}
@@ -30,6 +30,7 @@ type Entry struct {
FullPath

Attr
+Extended map[string][]byte

// the following is for files
Chunks []*filer_pb.FileChunk `json:"chunks,omitempty"`
@@ -56,6 +57,7 @@ func (entry *Entry) ToProtoEntry() *filer_pb.Entry {
IsDirectory: entry.IsDirectory(),
Attributes: EntryAttributeToPb(entry),
Chunks: entry.Chunks,
+Extended: entry.Extended,
}
}

@@ -1,18 +1,21 @@
package filer2

import (
+"bytes"
+"fmt"
"os"
"time"

-"fmt"
-"github.com/chrislusf/seaweedfs/weed/pb/filer_pb"
"github.com/golang/protobuf/proto"

+"github.com/chrislusf/seaweedfs/weed/pb/filer_pb"
)

func (entry *Entry) EncodeAttributesAndChunks() ([]byte, error) {
message := &filer_pb.Entry{
Attributes: EntryAttributeToPb(entry),
Chunks: entry.Chunks,
+Extended: entry.Extended,
}
return proto.Marshal(message)
}
@@ -27,6 +30,8 @@ func (entry *Entry) DecodeAttributesAndChunks(blob []byte) error {

entry.Attr = PbToEntryAttribute(message.Attributes)

+entry.Extended = message.Extended
+
entry.Chunks = message.Chunks

return nil
@@ -84,6 +89,10 @@ func EqualEntry(a, b *Entry) bool {
return false
}

+if !eq(a.Extended, b.Extended) {
+return false
+}
+
for i := 0; i < len(a.Chunks); i++ {
if !proto.Equal(a.Chunks[i], b.Chunks[i]) {
return false
@@ -91,3 +100,17 @@ func EqualEntry(a, b *Entry) bool {
}
return true
}

+func eq(a, b map[string][]byte) bool {
+if len(a) != len(b) {
+return false
+}
+
+for k, v := range a {
+if w, ok := b[k]; !ok || !bytes.Equal(v, w) {
+return false
+}
+}
+
+return true
+}
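
Entries now carry an Extended map of byte slices, and EqualEntry compares it with the new eq helper above. A standalone copy of that comparison follows; the attribute key in the example is made up for illustration.

package main

import (
	"bytes"
	"fmt"
)

// Standalone version of the new eq helper: two attribute maps are equal when
// they have the same keys and byte-for-byte equal values.
func eq(a, b map[string][]byte) bool {
	if len(a) != len(b) {
		return false
	}
	for k, v := range a {
		if w, ok := b[k]; !ok || !bytes.Equal(v, w) {
			return false
		}
	}
	return true
}

func main() {
	x := map[string][]byte{"xattr.user.tag": []byte("hot")} // hypothetical extended attribute
	y := map[string][]byte{"xattr.user.tag": []byte("hot")}
	fmt.Println(eq(x, y)) // true
}
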
@@ -28,13 +28,13 @@ func (store *EtcdStore) GetName() string {
return "etcd"
}

-func (store *EtcdStore) Initialize(configuration weed_util.Configuration) (err error) {
-servers := configuration.GetString("servers")
+func (store *EtcdStore) Initialize(configuration weed_util.Configuration, prefix string) (err error) {
+servers := configuration.GetString(prefix + "servers")
if servers == "" {
servers = "localhost:2379"
}

-timeout := configuration.GetString("timeout")
+timeout := configuration.GetString(prefix + "timeout")
if timeout == "" {
timeout = "3s"
}
@@ -123,6 +123,16 @@ func (store *EtcdStore) DeleteEntry(ctx context.Context, fullpath filer2.FullPat
return nil
}

+func (store *EtcdStore) DeleteFolderChildren(ctx context.Context, fullpath filer2.FullPath) (err error) {
+directoryPrefix := genDirectoryKeyPrefix(fullpath, "")
+
+if _, err := store.client.Delete(ctx, string(directoryPrefix), clientv3.WithPrefix()); err != nil {
+return fmt.Errorf("deleteFolderChildren %s : %v", fullpath, err)
+}
+
+return nil
+}
+
func (store *EtcdStore) ListDirectoryEntries(
ctx context.Context, fullpath filer2.FullPath, startFileName string, inclusive bool, limit int,
) (entries []*filer2.Entry, err error) {
@@ -331,6 +331,42 @@ func TestChunksReading(t *testing.T) {
{Offset: 0, Size: 100, FileId: "asdf", LogicOffset: 100},
},
},
+// case 8: edge cases
+{
+Chunks: []*filer_pb.FileChunk{
+{Offset: 0, Size: 100, FileId: "abc", Mtime: 123},
+{Offset: 90, Size: 200, FileId: "asdf", Mtime: 134},
+{Offset: 190, Size: 300, FileId: "fsad", Mtime: 353},
+},
+Offset: 0,
+Size: 300,
+Expected: []*ChunkView{
+{Offset: 0, Size: 90, FileId: "abc", LogicOffset: 0},
+{Offset: 0, Size: 100, FileId: "asdf", LogicOffset: 90},
+{Offset: 0, Size: 110, FileId: "fsad", LogicOffset: 190},
+},
+},
+// case 9: edge cases
+{
+Chunks: []*filer_pb.FileChunk{
+{Offset: 0, Size: 43175947, FileId: "2,111fc2cbfac1", Mtime: 1},
+{Offset: 43175936, Size: 52981771 - 43175936, FileId: "2,112a36ea7f85", Mtime: 2},
+{Offset: 52981760, Size: 72564747 - 52981760, FileId: "4,112d5f31c5e7", Mtime: 3},
+{Offset: 72564736, Size: 133255179 - 72564736, FileId: "1,113245f0cdb6", Mtime: 4},
+{Offset: 133255168, Size: 137269259 - 133255168, FileId: "3,1141a70733b5", Mtime: 5},
+{Offset: 137269248, Size: 153578836 - 137269248, FileId: "1,114201d5bbdb", Mtime: 6},
+},
+Offset: 0,
+Size: 153578836,
+Expected: []*ChunkView{
+{Offset: 0, Size: 43175936, FileId: "2,111fc2cbfac1", LogicOffset: 0},
+{Offset: 0, Size: 52981760 - 43175936, FileId: "2,112a36ea7f85", LogicOffset: 43175936},
+{Offset: 0, Size: 72564736 - 52981760, FileId: "4,112d5f31c5e7", LogicOffset: 52981760},
+{Offset: 0, Size: 133255168 - 72564736, FileId: "1,113245f0cdb6", LogicOffset: 72564736},
+{Offset: 0, Size: 137269248 - 133255168, FileId: "3,1141a70733b5", LogicOffset: 133255168},
+{Offset: 0, Size: 153578836 - 137269248, FileId: "1,114201d5bbdb", LogicOffset: 137269248},
+},
+},
}

for i, testcase := range testcases {
@@ -3,18 +3,21 @@ package filer2
import (
"context"
"fmt"
-"google.golang.org/grpc"
-"math"
"os"
"path/filepath"
"strings"
"time"

+"google.golang.org/grpc"

+"github.com/karlseguin/ccache"

"github.com/chrislusf/seaweedfs/weed/glog"
"github.com/chrislusf/seaweedfs/weed/wdclient"
-"github.com/karlseguin/ccache"
)

+const PaginationSize = 1024 * 256

var (
OS_UID = uint32(os.Getuid())
OS_GID = uint32(os.Getgid())
@@ -32,7 +35,7 @@ func NewFiler(masters []string, grpcDialOption grpc.DialOption) *Filer {
f := &Filer{
directoryCache: ccache.New(ccache.Configure().MaxSize(1000).ItemsToPrune(100)),
MasterClient: wdclient.NewMasterClient(context.Background(), grpcDialOption, "filer", masters),
-fileIdDeletionChan: make(chan string, 4096),
+fileIdDeletionChan: make(chan string, PaginationSize),
GrpcDialOption: grpcDialOption,
}

@@ -69,7 +72,7 @@ func (f *Filer) RollbackTransaction(ctx context.Context) error {
return f.store.RollbackTransaction(ctx)
}

-func (f *Filer) CreateEntry(ctx context.Context, entry *Entry) error {
+func (f *Filer) CreateEntry(ctx context.Context, entry *Entry, o_excl bool) error {

if string(entry.FullPath) == "/" {
return nil
@@ -93,7 +96,7 @@ func (f *Filer) CreateEntry(ctx context.Context, entry *Entry) error {
glog.V(4).Infof("find uncached directory: %s", dirPath)
dirEntry, _ = f.FindEntry(ctx, FullPath(dirPath))
} else {
-glog.V(4).Infof("found cached directory: %s", dirPath)
+// glog.V(4).Infof("found cached directory: %s", dirPath)
}

// no such existing directory
@@ -117,6 +120,7 @@ func (f *Filer) CreateEntry(ctx context.Context, entry *Entry) error {
mkdirErr := f.store.InsertEntry(ctx, dirEntry)
if mkdirErr != nil {
if _, err := f.FindEntry(ctx, FullPath(dirPath)); err == ErrNotFound {
+glog.V(3).Infof("mkdir %s: %v", dirPath, mkdirErr)
return fmt.Errorf("mkdir %s: %v", dirPath, mkdirErr)
}
} else {
@@ -124,6 +128,7 @@ func (f *Filer) CreateEntry(ctx context.Context, entry *Entry) error {
}

} else if !dirEntry.IsDirectory() {
+glog.Errorf("CreateEntry %s: %s should be a directory", entry.FullPath, dirPath)
return fmt.Errorf("%s is a file", dirPath)
}

@@ -138,6 +143,7 @@ func (f *Filer) CreateEntry(ctx context.Context, entry *Entry) error {
}

if lastDirectoryEntry == nil {
+glog.Errorf("CreateEntry %s: lastDirectoryEntry is nil", entry.FullPath)
return fmt.Errorf("parent folder not found: %v", entry.FullPath)
}

@@ -151,12 +157,17 @@ func (f *Filer) CreateEntry(ctx context.Context, entry *Entry) error {

oldEntry, _ := f.FindEntry(ctx, entry.FullPath)

+glog.V(4).Infof("CreateEntry %s: old entry: %v exclusive:%v", entry.FullPath, oldEntry, o_excl)
if oldEntry == nil {
if err := f.store.InsertEntry(ctx, entry); err != nil {
glog.Errorf("insert entry %s: %v", entry.FullPath, err)
return fmt.Errorf("insert entry %s: %v", entry.FullPath, err)
}
} else {
+if o_excl {
+glog.V(3).Infof("EEXIST: entry %s already exists", entry.FullPath)
+return fmt.Errorf("EEXIST: entry %s already exists", entry.FullPath)
+}
if err := f.UpdateEntry(ctx, oldEntry, entry); err != nil {
glog.Errorf("update entry %s: %v", entry.FullPath, err)
return fmt.Errorf("update entry %s: %v", entry.FullPath, err)
@@ -167,6 +178,8 @@ func (f *Filer) CreateEntry(ctx context.Context, entry *Entry) error {

f.deleteChunksIfNotNew(oldEntry, entry)

+glog.V(4).Infof("CreateEntry %s: created", entry.FullPath)
+
return nil
}

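
CreateEntry now takes an o_excl flag: when the entry already exists and the flag is set, the call fails with an EEXIST error instead of falling through to UpdateEntry. A condensed, self-contained restatement of that decision (not the SeaweedFS code itself; oldEntryExists stands in for the FindEntry lookup):

package main

import (
	"errors"
	"fmt"
)

// Condensed restatement of the new create-vs-update decision in CreateEntry.
func createOrUpdate(oldEntryExists, oExcl bool) error {
	if !oldEntryExists {
		return nil // insert the new entry
	}
	if oExcl {
		return errors.New("EEXIST: entry already exists")
	}
	return nil // fall through to UpdateEntry
}

func main() {
	fmt.Println(createOrUpdate(true, true))  // EEXIST error, as an exclusive create would expect
	fmt.Println(createOrUpdate(true, false)) // nil: the existing entry is updated
}
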
@ -203,67 +216,6 @@ func (f *Filer) FindEntry(ctx context.Context, p FullPath) (entry *Entry, err er
|
|||||||
return f.store.FindEntry(ctx, p)
|
return f.store.FindEntry(ctx, p)
|
||||||
}
|
}
|
||||||
|
|
||||||
func (f *Filer) DeleteEntryMetaAndData(ctx context.Context, p FullPath, isRecursive bool, ignoreRecursiveError, shouldDeleteChunks bool) (err error) {
|
|
||||||
entry, err := f.FindEntry(ctx, p)
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
|
|
||||||
if entry.IsDirectory() {
|
|
||||||
limit := int(1)
|
|
||||||
if isRecursive {
|
|
||||||
limit = math.MaxInt32
|
|
||||||
}
|
|
||||||
lastFileName := ""
|
|
||||||
includeLastFile := false
|
|
||||||
for limit > 0 {
|
|
||||||
entries, err := f.ListDirectoryEntries(ctx, p, lastFileName, includeLastFile, 1024)
|
|
||||||
if err != nil {
|
|
||||||
glog.Errorf("list folder %s: %v", p, err)
|
|
||||||
return fmt.Errorf("list folder %s: %v", p, err)
|
|
||||||
}
|
|
||||||
|
|
||||||
if len(entries) == 0 {
|
|
||||||
break
|
|
||||||
}
|
|
||||||
|
|
||||||
if isRecursive {
|
|
||||||
for _, sub := range entries {
|
|
||||||
lastFileName = sub.Name()
|
|
||||||
err = f.DeleteEntryMetaAndData(ctx, sub.FullPath, isRecursive, ignoreRecursiveError, shouldDeleteChunks)
|
|
||||||
if err != nil && !ignoreRecursiveError {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
limit--
|
|
||||||
if limit <= 0 {
|
|
||||||
break
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
if len(entries) < 1024 {
|
|
||||||
break
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
f.cacheDelDirectory(string(p))
|
|
||||||
|
|
||||||
}
|
|
||||||
|
|
||||||
if shouldDeleteChunks {
|
|
||||||
f.DeleteChunks(p, entry.Chunks)
|
|
||||||
}
|
|
||||||
|
|
||||||
if p == "/" {
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
glog.V(3).Infof("deleting entry %v", p)
|
|
||||||
|
|
||||||
f.NotifyUpdateEvent(entry, nil, shouldDeleteChunks)
|
|
||||||
|
|
||||||
return f.store.DeleteEntry(ctx, p)
|
|
||||||
}
|
|
||||||
|
|
||||||
func (f *Filer) ListDirectoryEntries(ctx context.Context, p FullPath, startFileName string, inclusive bool, limit int) ([]*Entry, error) {
|
func (f *Filer) ListDirectoryEntries(ctx context.Context, p FullPath, startFileName string, inclusive bool, limit int) ([]*Entry, error) {
|
||||||
if strings.HasSuffix(string(p), "/") && len(p) > 1 {
|
if strings.HasSuffix(string(p), "/") && len(p) > 1 {
|
||||||
p = p[0 : len(p)-1]
|
p = p[0 : len(p)-1]
|
||||||
@@ -3,6 +3,8 @@ package filer2
import (
"context"
"fmt"
+"io"
+"math"
"strings"
"sync"

@@ -20,10 +22,10 @@ func VolumeId(fileId string) string {
}

type FilerClient interface {
-WithFilerClient(ctx context.Context, fn func(filer_pb.SeaweedFilerClient) error) error
+WithFilerClient(ctx context.Context, fn func(context.Context, filer_pb.SeaweedFilerClient) error) error
}

-func ReadIntoBuffer(ctx context.Context, filerClient FilerClient, fullFilePath string, buff []byte, chunkViews []*ChunkView, baseOffset int64) (totalRead int64, err error) {
+func ReadIntoBuffer(ctx context.Context, filerClient FilerClient, fullFilePath FullPath, buff []byte, chunkViews []*ChunkView, baseOffset int64) (totalRead int64, err error) {
var vids []string
for _, chunkView := range chunkViews {
vids = append(vids, VolumeId(chunkView.FileId))
@@ -31,7 +33,7 @@ func ReadIntoBuffer(ctx context.Context, filerClient FilerClient, fullFilePath s

vid2Locations := make(map[string]*filer_pb.Locations)

-err = filerClient.WithFilerClient(ctx, func(client filer_pb.SeaweedFilerClient) error {
+err = filerClient.WithFilerClient(ctx, func(ctx context.Context, client filer_pb.SeaweedFilerClient) error {

glog.V(4).Infof("read fh lookup volume id locations: %v", vids)
resp, err := client.LookupVolume(ctx, &filer_pb.LookupVolumeRequest{
@@ -91,68 +93,75 @@ func ReadIntoBuffer(ctx context.Context, filerClient FilerClient, fullFilePath s
return
}

-func GetEntry(ctx context.Context, filerClient FilerClient, fullFilePath string) (entry *filer_pb.Entry, err error) {
+func GetEntry(ctx context.Context, filerClient FilerClient, fullFilePath FullPath) (entry *filer_pb.Entry, err error) {

-dir, name := FullPath(fullFilePath).DirAndName()
+dir, name := fullFilePath.DirAndName()

-err = filerClient.WithFilerClient(ctx, func(client filer_pb.SeaweedFilerClient) error {
+err = filerClient.WithFilerClient(ctx, func(ctx context.Context, client filer_pb.SeaweedFilerClient) error {

request := &filer_pb.LookupDirectoryEntryRequest{
Directory: dir,
Name: name,
}

-glog.V(3).Infof("read %s request: %v", fullFilePath, request)
+// glog.V(3).Infof("read %s request: %v", fullFilePath, request)
resp, err := client.LookupDirectoryEntry(ctx, request)
if err != nil {
if err == ErrNotFound || strings.Contains(err.Error(), ErrNotFound.Error()) {
return nil
}
-glog.V(3).Infof("read %s attr %v: %v", fullFilePath, request, err)
+glog.V(3).Infof("read %s %v: %v", fullFilePath, resp, err)
return err
}

-if resp.Entry != nil {
-entry = resp.Entry
+if resp.Entry == nil {
+// glog.V(3).Infof("read %s entry: %v", fullFilePath, entry)
+return nil
}

+entry = resp.Entry
return nil
})

return
}

-func ReadDirAllEntries(ctx context.Context, filerClient FilerClient, fullDirPath string, fn func(entry *filer_pb.Entry)) (err error) {
+func ReadDirAllEntries(ctx context.Context, filerClient FilerClient, fullDirPath FullPath, prefix string, fn func(entry *filer_pb.Entry, isLast bool)) (err error) {

-err = filerClient.WithFilerClient(ctx, func(client filer_pb.SeaweedFilerClient) error {
+err = filerClient.WithFilerClient(ctx, func(ctx context.Context, client filer_pb.SeaweedFilerClient) error {

-paginationLimit := 1024

lastEntryName := ""

+request := &filer_pb.ListEntriesRequest{
+Directory: string(fullDirPath),
+Prefix: prefix,
+StartFromFileName: lastEntryName,
+Limit: math.MaxUint32,
+}
+
+glog.V(3).Infof("read directory: %v", request)
+stream, err := client.ListEntries(ctx, request)
+if err != nil {
+return fmt.Errorf("list %s: %v", fullDirPath, err)
+}
+
+var prevEntry *filer_pb.Entry
for {
+resp, recvErr := stream.Recv()
-request := &filer_pb.ListEntriesRequest{
-Directory: fullDirPath,
-StartFromFileName: lastEntryName,
-Limit: uint32(paginationLimit),
+if recvErr != nil {
+if recvErr == io.EOF {
+if prevEntry != nil {
+fn(prevEntry, true)
+}
+break
+} else {
+return recvErr
+}
}
+if prevEntry != nil {
+fn(prevEntry, false)
-glog.V(3).Infof("read directory: %v", request)
-resp, err := client.ListEntries(ctx, request)
-if err != nil {
-return fmt.Errorf("list %s: %v", fullDirPath, err)
}
+prevEntry = resp.Entry
-for _, entry := range resp.Entries {
-fn(entry)
-lastEntryName = entry.Name
-}

-if len(resp.Entries) < paginationLimit {
-break
-}

}

return nil
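
ReadDirAllEntries now consumes a server-side stream from ListEntries and holds one entry back so the callback can be told which entry is the last (isLast). The sketch below is not the SeaweedFS code; it is a generic, self-contained restatement of that look-ahead loop, using a stand-in recv function instead of the real gRPC stream.

package main

import (
	"errors"
	"fmt"
	"io"
)

// recvFunc stands in for stream.Recv: it returns the next item or io.EOF.
type recvFunc func() (string, error)

// forEach reports isLast=true only once Recv returns io.EOF, mirroring the
// prevEntry handling in the new ReadDirAllEntries.
func forEach(recv recvFunc, fn func(name string, isLast bool)) error {
	var prev string
	var havePrev bool
	for {
		name, err := recv()
		if err != nil {
			if errors.Is(err, io.EOF) {
				if havePrev {
					fn(prev, true)
				}
				return nil
			}
			return err
		}
		if havePrev {
			fn(prev, false)
		}
		prev, havePrev = name, true
	}
}

func main() {
	names := []string{"a", "b", "c"}
	i := 0
	recv := func() (string, error) {
		if i >= len(names) {
			return "", io.EOF
		}
		i++
		return names[i-1], nil
	}
	forEach(recv, func(n string, last bool) { fmt.Println(n, last) })
}
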
weed/filer2/filer_delete_entry.go (new file, 102 lines)
@@ -0,0 +1,102 @@
+package filer2
+
+import (
+"context"
+"fmt"
+
+"github.com/chrislusf/seaweedfs/weed/glog"
+"github.com/chrislusf/seaweedfs/weed/pb/filer_pb"
+)
+
+func (f *Filer) DeleteEntryMetaAndData(ctx context.Context, p FullPath, isRecursive bool, ignoreRecursiveError, shouldDeleteChunks bool) (err error) {
+if p == "/" {
+return nil
+}
+
+entry, findErr := f.FindEntry(ctx, p)
+if findErr != nil {
+return findErr
+}
+
+var chunks []*filer_pb.FileChunk
+chunks = append(chunks, entry.Chunks...)
+if entry.IsDirectory() {
+// delete the folder children, not including the folder itself
+var dirChunks []*filer_pb.FileChunk
+dirChunks, err = f.doBatchDeleteFolderMetaAndData(ctx, entry, isRecursive, ignoreRecursiveError, shouldDeleteChunks)
+if err != nil {
+return fmt.Errorf("delete directory %s: %v", p, err)
+}
+chunks = append(chunks, dirChunks...)
+f.cacheDelDirectory(string(p))
+}
+// delete the file or folder
+err = f.doDeleteEntryMetaAndData(ctx, entry, shouldDeleteChunks)
+if err != nil {
+return fmt.Errorf("delete file %s: %v", p, err)
+}
+
+if shouldDeleteChunks {
+go f.DeleteChunks(chunks)
+}
+
+return nil
+}
+
+func (f *Filer) doBatchDeleteFolderMetaAndData(ctx context.Context, entry *Entry, isRecursive bool, ignoreRecursiveError, shouldDeleteChunks bool) (chunks []*filer_pb.FileChunk, err error) {
+
+lastFileName := ""
+includeLastFile := false
+for {
+entries, err := f.ListDirectoryEntries(ctx, entry.FullPath, lastFileName, includeLastFile, PaginationSize)
+if err != nil {
+glog.Errorf("list folder %s: %v", entry.FullPath, err)
+return nil, fmt.Errorf("list folder %s: %v", entry.FullPath, err)
+}
+if lastFileName == "" && !isRecursive && len(entries) > 0 {
+// only for first iteration in the loop
+return nil, fmt.Errorf("fail to delete non-empty folder: %s", entry.FullPath)
+}
+
+for _, sub := range entries {
+lastFileName = sub.Name()
+var dirChunks []*filer_pb.FileChunk
+if sub.IsDirectory() {
+dirChunks, err = f.doBatchDeleteFolderMetaAndData(ctx, sub, isRecursive, ignoreRecursiveError, shouldDeleteChunks)
+}
+if err != nil && !ignoreRecursiveError {
+return nil, err
+}
+if shouldDeleteChunks {
+chunks = append(chunks, dirChunks...)
+}
+}
+
+if len(entries) < PaginationSize {
+break
+}
+}
+
+f.cacheDelDirectory(string(entry.FullPath))
+
+glog.V(3).Infof("deleting directory %v", entry.FullPath)
+
+if storeDeletionErr := f.store.DeleteFolderChildren(ctx, entry.FullPath); storeDeletionErr != nil {
+return nil, fmt.Errorf("filer store delete: %v", storeDeletionErr)
+}
+f.NotifyUpdateEvent(entry, nil, shouldDeleteChunks)
+
+return chunks, nil
+}
+
+func (f *Filer) doDeleteEntryMetaAndData(ctx context.Context, entry *Entry, shouldDeleteChunks bool) (err error) {
+
+glog.V(3).Infof("deleting entry %v", entry.FullPath)
+
+if storeDeletionErr := f.store.DeleteEntry(ctx, entry.FullPath); storeDeletionErr != nil {
+return fmt.Errorf("filer store delete: %v", storeDeletionErr)
+}
+f.NotifyUpdateEvent(entry, nil, shouldDeleteChunks)
+
+return nil
+}
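
The new filer_delete_entry.go splits deletion into two phases: doBatchDeleteFolderMetaAndData recursively collects child chunks and calls the store's DeleteFolderChildren per directory, then doDeleteEntryMetaAndData removes the entry itself, and chunk data is deleted asynchronously via go f.DeleteChunks(chunks). One subtle guard is that a non-recursive delete fails as soon as the first page of children is non-empty. A condensed, self-contained restatement of that guard follows (the page contents are made up; this is not the SeaweedFS code):

package main

import (
	"fmt"
)

// canDeleteFolder mirrors the first-page check in doBatchDeleteFolderMetaAndData:
// a non-recursive delete fails if the folder has any children at all.
func canDeleteFolder(childPages [][]string, recursive bool) error {
	lastFileName := ""
	for _, entries := range childPages {
		if lastFileName == "" && !recursive && len(entries) > 0 {
			return fmt.Errorf("fail to delete non-empty folder")
		}
		for _, name := range entries {
			lastFileName = name
		}
	}
	return nil
}

func main() {
	fmt.Println(canDeleteFolder([][]string{{"a", "b"}}, false)) // error: folder not empty
	fmt.Println(canDeleteFolder([][]string{{"a", "b"}}, true))  // nil: recursive delete proceeds
}
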
@@ -51,9 +51,8 @@ func (f *Filer) loopProcessingDeletion() {
}
}

-func (f *Filer) DeleteChunks(fullpath FullPath, chunks []*filer_pb.FileChunk) {
+func (f *Filer) DeleteChunks(chunks []*filer_pb.FileChunk) {
for _, chunk := range chunks {
-glog.V(3).Infof("deleting %s chunk %s", fullpath, chunk.String())
f.fileIdDeletionChan <- chunk.GetFileIdString()
}
}
@@ -70,7 +69,7 @@ func (f *Filer) deleteChunksIfNotNew(oldEntry, newEntry *Entry) {
return
}
if newEntry == nil {
-f.DeleteChunks(oldEntry.FullPath, oldEntry.Chunks)
+f.DeleteChunks(oldEntry.Chunks)
}

var toDelete []*filer_pb.FileChunk
@@ -84,5 +83,5 @@ func (f *Filer) deleteChunksIfNotNew(oldEntry, newEntry *Entry) {
toDelete = append(toDelete, oldChunk)
}
}
-f.DeleteChunks(oldEntry.FullPath, toDelete)
+f.DeleteChunks(toDelete)
}
@@ -14,12 +14,13 @@ type FilerStore interface {
// GetName gets the name to locate the configuration in filer.toml file
GetName() string
// Initialize initializes the file store
-Initialize(configuration util.Configuration) error
+Initialize(configuration util.Configuration, prefix string) error
InsertEntry(context.Context, *Entry) error
UpdateEntry(context.Context, *Entry) (err error)
// err == filer2.ErrNotFound if not found
FindEntry(context.Context, FullPath) (entry *Entry, err error)
DeleteEntry(context.Context, FullPath) (err error)
+DeleteFolderChildren(context.Context, FullPath) (err error)
ListDirectoryEntries(ctx context.Context, dirPath FullPath, startFileName string, includeStartFile bool, limit int) ([]*Entry, error)

BeginTransaction(ctx context.Context) (context.Context, error)
@@ -46,8 +47,8 @@ func (fsw *FilerStoreWrapper) GetName() string {
return fsw.actualStore.GetName()
}

-func (fsw *FilerStoreWrapper) Initialize(configuration util.Configuration) error {
-return fsw.actualStore.Initialize(configuration)
+func (fsw *FilerStoreWrapper) Initialize(configuration util.Configuration, prefix string) error {
+return fsw.actualStore.Initialize(configuration, prefix)
}

func (fsw *FilerStoreWrapper) InsertEntry(ctx context.Context, entry *Entry) error {
@@ -97,6 +98,16 @@ func (fsw *FilerStoreWrapper) DeleteEntry(ctx context.Context, fp FullPath) (err
return fsw.actualStore.DeleteEntry(ctx, fp)
}

+func (fsw *FilerStoreWrapper) DeleteFolderChildren(ctx context.Context, fp FullPath) (err error) {
+stats.FilerStoreCounter.WithLabelValues(fsw.actualStore.GetName(), "deleteFolderChildren").Inc()
+start := time.Now()
+defer func() {
+stats.FilerStoreHistogram.WithLabelValues(fsw.actualStore.GetName(), "deleteFolderChildren").Observe(time.Since(start).Seconds())
+}()
+
+return fsw.actualStore.DeleteFolderChildren(ctx, fp)
+}
+
func (fsw *FilerStoreWrapper) ListDirectoryEntries(ctx context.Context, dirPath FullPath, startFileName string, includeStartFile bool, limit int) ([]*Entry, error) {
stats.FilerStoreCounter.WithLabelValues(fsw.actualStore.GetName(), "list").Inc()
start := time.Now()
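
FilerStore implementations now take a key prefix in Initialize and must provide DeleteFolderChildren, which the wrapper instruments with the same counters and histograms as the other calls. The sketch below only shows the shape a store has to satisfy; it is not part of the commit, Config and FullPath are stand-ins for util.Configuration and filer2.FullPath, and the map-based store is purely illustrative.

package main

import (
	"context"
	"fmt"
	"strings"
	"sync"
)

// Config stands in for util.Configuration; FullPath for filer2.FullPath.
type Config interface{ GetString(key string) string }
type FullPath string

type MapStore struct {
	mu      sync.Mutex
	entries map[FullPath][]byte
	dir     string
}

// Initialize reads its settings with a "<store>." prefix, like the real stores now do.
func (s *MapStore) Initialize(cfg Config, prefix string) error {
	s.dir = cfg.GetString(prefix + "dir")
	s.entries = make(map[FullPath][]byte)
	return nil
}

// DeleteFolderChildren removes every entry strictly under the given folder.
func (s *MapStore) DeleteFolderChildren(ctx context.Context, fp FullPath) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	for p := range s.entries {
		if strings.HasPrefix(string(p), string(fp)+"/") {
			delete(s.entries, p)
		}
	}
	return nil
}

type mapConfig map[string]string

func (m mapConfig) GetString(k string) string { return m[k] }

func main() {
	s := &MapStore{}
	s.Initialize(mapConfig{"memory.dir": "/tmp"}, "memory.")
	s.entries[FullPath("/a/b")] = nil
	s.DeleteFolderChildren(context.Background(), FullPath("/a"))
	fmt.Println(len(s.entries)) // 0
}
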
@@ -3,6 +3,8 @@ package filer2
import (
"path/filepath"
"strings"

+"github.com/chrislusf/seaweedfs/weed/util"
)

type FullPath string
@@ -34,3 +36,7 @@ func (fp FullPath) Child(name string) FullPath {
}
return FullPath(dir + "/" + name)
}

+func (fp FullPath) AsInode() uint64 {
+return uint64(util.HashStringToLong(string(fp)))
+}
@@ -5,12 +5,13 @@ import (
"context"
"fmt"

-"github.com/chrislusf/seaweedfs/weed/filer2"
-"github.com/chrislusf/seaweedfs/weed/glog"
-weed_util "github.com/chrislusf/seaweedfs/weed/util"
"github.com/syndtr/goleveldb/leveldb"
"github.com/syndtr/goleveldb/leveldb/opt"
leveldb_util "github.com/syndtr/goleveldb/leveldb/util"

+"github.com/chrislusf/seaweedfs/weed/filer2"
+"github.com/chrislusf/seaweedfs/weed/glog"
+weed_util "github.com/chrislusf/seaweedfs/weed/util"
)

const (
@@ -29,8 +30,8 @@ func (store *LevelDBStore) GetName() string {
return "leveldb"
}

-func (store *LevelDBStore) Initialize(configuration weed_util.Configuration) (err error) {
-dir := configuration.GetString("dir")
+func (store *LevelDBStore) Initialize(configuration weed_util.Configuration, prefix string) (err error) {
+dir := configuration.GetString(prefix + "dir")
return store.initialize(dir)
}

@@ -123,6 +124,34 @@ func (store *LevelDBStore) DeleteEntry(ctx context.Context, fullpath filer2.Full
return nil
}

+func (store *LevelDBStore) DeleteFolderChildren(ctx context.Context, fullpath filer2.FullPath) (err error) {
+
+batch := new(leveldb.Batch)
+
+directoryPrefix := genDirectoryKeyPrefix(fullpath, "")
+iter := store.db.NewIterator(&leveldb_util.Range{Start: directoryPrefix}, nil)
+for iter.Next() {
+key := iter.Key()
+if !bytes.HasPrefix(key, directoryPrefix) {
+break
+}
+fileName := getNameFromKey(key)
+if fileName == "" {
+continue
+}
+batch.Delete([]byte(genKey(string(fullpath), fileName)))
+}
+iter.Release()
+
+err = store.db.Write(batch, nil)
+
+if err != nil {
+return fmt.Errorf("delete %s : %v", fullpath, err)
+}
+
+return nil
+}
+
func (store *LevelDBStore) ListDirectoryEntries(ctx context.Context, fullpath filer2.FullPath, startFileName string, inclusive bool,
limit int) (entries []*filer2.Entry, err error) {

@@ -30,7 +30,7 @@ func TestCreateAndFind(t *testing.T) {
},
}

-if err := filer.CreateEntry(ctx, entry1); err != nil {
+if err := filer.CreateEntry(ctx, entry1, false); err != nil {
t.Errorf("create entry %v: %v", entry1.FullPath, err)
return
}
@@ -8,12 +8,13 @@ import (
"io"
"os"

-"github.com/chrislusf/seaweedfs/weed/filer2"
-"github.com/chrislusf/seaweedfs/weed/glog"
-weed_util "github.com/chrislusf/seaweedfs/weed/util"
"github.com/syndtr/goleveldb/leveldb"
"github.com/syndtr/goleveldb/leveldb/opt"
leveldb_util "github.com/syndtr/goleveldb/leveldb/util"

+"github.com/chrislusf/seaweedfs/weed/filer2"
+"github.com/chrislusf/seaweedfs/weed/glog"
+weed_util "github.com/chrislusf/seaweedfs/weed/util"
)

func init() {
@@ -29,8 +30,8 @@ func (store *LevelDB2Store) GetName() string {
return "leveldb2"
}

-func (store *LevelDB2Store) Initialize(configuration weed_util.Configuration) (err error) {
-dir := configuration.GetString("dir")
+func (store *LevelDB2Store) Initialize(configuration weed_util.Configuration, prefix string) (err error) {
+dir := configuration.GetString(prefix + "dir")
return store.initialize(dir, 8)
}

@@ -134,6 +135,34 @@ func (store *LevelDB2Store) DeleteEntry(ctx context.Context, fullpath filer2.Ful
return nil
}

+func (store *LevelDB2Store) DeleteFolderChildren(ctx context.Context, fullpath filer2.FullPath) (err error) {
+directoryPrefix, partitionId := genDirectoryKeyPrefix(fullpath, "", store.dbCount)
+
+batch := new(leveldb.Batch)
+
+iter := store.dbs[partitionId].NewIterator(&leveldb_util.Range{Start: directoryPrefix}, nil)
+for iter.Next() {
+key := iter.Key()
+if !bytes.HasPrefix(key, directoryPrefix) {
+break
+}
+fileName := getNameFromKey(key)
+if fileName == "" {
+continue
+}
+batch.Delete(append(directoryPrefix, []byte(fileName)...))
+}
+iter.Release()
+
+err = store.dbs[partitionId].Write(batch, nil)
+
+if err != nil {
+return fmt.Errorf("delete %s : %v", fullpath, err)
+}
+
+return nil
+}
+
func (store *LevelDB2Store) ListDirectoryEntries(ctx context.Context, fullpath filer2.FullPath, startFileName string, inclusive bool,
limit int) (entries []*filer2.Entry, err error) {

@@ -30,7 +30,7 @@ func TestCreateAndFind(t *testing.T) {
},
}

-if err := filer.CreateEntry(ctx, entry1); err != nil {
+if err := filer.CreateEntry(ctx, entry1, false); err != nil {
t.Errorf("create entry %v: %v", entry1.FullPath, err)
return
}
@ -1,132 +0,0 @@
|
|||||||
package memdb
|
|
||||||
|
|
||||||
import (
|
|
||||||
"context"
|
|
||||||
"fmt"
|
|
||||||
"github.com/chrislusf/seaweedfs/weed/filer2"
|
|
||||||
"github.com/chrislusf/seaweedfs/weed/util"
|
|
||||||
"github.com/google/btree"
|
|
||||||
"strings"
|
|
||||||
"sync"
|
|
||||||
)
|
|
||||||
|
|
||||||
func init() {
|
|
||||||
filer2.Stores = append(filer2.Stores, &MemDbStore{})
|
|
||||||
}
|
|
||||||
|
|
||||||
type MemDbStore struct {
|
|
||||||
tree *btree.BTree
|
|
||||||
treeLock sync.Mutex
|
|
||||||
}
|
|
||||||
|
|
||||||
type entryItem struct {
|
|
||||||
*filer2.Entry
|
|
||||||
}
|
|
||||||
|
|
||||||
func (a entryItem) Less(b btree.Item) bool {
|
|
||||||
return strings.Compare(string(a.FullPath), string(b.(entryItem).FullPath)) < 0
|
|
||||||
}
|
|
||||||
|
|
||||||
func (store *MemDbStore) GetName() string {
|
|
||||||
return "memory"
|
|
||||||
}
|
|
||||||
|
|
||||||
func (store *MemDbStore) Initialize(configuration util.Configuration) (err error) {
|
|
||||||
store.tree = btree.New(8)
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func (store *MemDbStore) BeginTransaction(ctx context.Context) (context.Context, error) {
|
|
||||||
return ctx, nil
|
|
||||||
}
|
|
||||||
func (store *MemDbStore) CommitTransaction(ctx context.Context) error {
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
func (store *MemDbStore) RollbackTransaction(ctx context.Context) error {
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func (store *MemDbStore) InsertEntry(ctx context.Context, entry *filer2.Entry) (err error) {
|
|
||||||
// println("inserting", entry.FullPath)
|
|
||||||
store.treeLock.Lock()
|
|
||||||
store.tree.ReplaceOrInsert(entryItem{entry})
|
|
||||||
store.treeLock.Unlock()
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func (store *MemDbStore) UpdateEntry(ctx context.Context, entry *filer2.Entry) (err error) {
|
|
||||||
if _, err = store.FindEntry(ctx, entry.FullPath); err != nil {
|
|
||||||
return fmt.Errorf("no such file %s : %v", entry.FullPath, err)
|
|
||||||
}
|
|
||||||
store.treeLock.Lock()
|
|
||||||
store.tree.ReplaceOrInsert(entryItem{entry})
|
|
||||||
store.treeLock.Unlock()
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func (store *MemDbStore) FindEntry(ctx context.Context, fullpath filer2.FullPath) (entry *filer2.Entry, err error) {
|
|
||||||
item := store.tree.Get(entryItem{&filer2.Entry{FullPath: fullpath}})
|
|
||||||
if item == nil {
|
|
||||||
return nil, filer2.ErrNotFound
|
|
||||||
}
|
|
||||||
entry = item.(entryItem).Entry
|
|
||||||
return entry, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func (store *MemDbStore) DeleteEntry(ctx context.Context, fullpath filer2.FullPath) (err error) {
|
|
||||||
store.treeLock.Lock()
|
|
||||||
store.tree.Delete(entryItem{&filer2.Entry{FullPath: fullpath}})
|
|
||||||
store.treeLock.Unlock()
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func (store *MemDbStore) ListDirectoryEntries(ctx context.Context, fullpath filer2.FullPath, startFileName string, inclusive bool, limit int) (entries []*filer2.Entry, err error) {
|
|
||||||
|
|
||||||
startFrom := string(fullpath)
|
|
||||||
if startFileName != "" {
|
|
||||||
startFrom = startFrom + "/" + startFileName
|
|
||||||
}
|
|
||||||
|
|
||||||
store.tree.AscendGreaterOrEqual(entryItem{&filer2.Entry{FullPath: filer2.FullPath(startFrom)}},
|
|
||||||
func(item btree.Item) bool {
|
|
||||||
if limit <= 0 {
|
|
||||||
return false
|
|
||||||
}
|
|
||||||
entry := item.(entryItem).Entry
|
|
||||||
// println("checking", entry.FullPath)
|
|
||||||
|
|
||||||
if entry.FullPath == fullpath {
|
|
||||||
// skipping the current directory
|
|
||||||
// println("skipping the folder", entry.FullPath)
|
|
||||||
return true
|
|
||||||
}
|
|
||||||
|
|
||||||
dir, name := entry.FullPath.DirAndName()
|
|
||||||
if name == startFileName {
|
|
||||||
if inclusive {
|
|
||||||
limit--
|
|
||||||
entries = append(entries, entry)
|
|
||||||
}
|
|
||||||
return true
|
|
||||||
}
|
|
||||||
|
|
||||||
// only iterate the same prefix
|
|
||||||
if !strings.HasPrefix(string(entry.FullPath), string(fullpath)) {
|
|
||||||
// println("breaking from", entry.FullPath)
|
|
||||||
return false
|
|
||||||
}
|
|
||||||
|
|
||||||
if dir != string(fullpath) {
|
|
||||||
// this could be items in deeper directories
|
|
||||||
// println("skipping deeper folder", entry.FullPath)
|
|
||||||
return true
|
|
||||||
}
|
|
||||||
// now process the directory items
|
|
||||||
// println("adding entry", entry.FullPath)
|
|
||||||
limit--
|
|
||||||
entries = append(entries, entry)
|
|
||||||
return true
|
|
||||||
},
|
|
||||||
)
|
|
||||||
return entries, nil
|
|
||||||
}
|
|
@ -1,149 +0,0 @@
|
|||||||
package memdb
|
|
||||||
|
|
||||||
import (
|
|
||||||
"context"
|
|
||||||
"github.com/chrislusf/seaweedfs/weed/filer2"
|
|
||||||
"testing"
|
|
||||||
)
|
|
||||||
|
|
||||||
func TestCreateAndFind(t *testing.T) {
|
|
||||||
filer := filer2.NewFiler(nil, nil)
|
|
||||||
store := &MemDbStore{}
|
|
||||||
store.Initialize(nil)
|
|
||||||
filer.SetStore(store)
|
|
||||||
filer.DisableDirectoryCache()
|
|
||||||
|
|
||||||
ctx := context.Background()
|
|
||||||
|
|
||||||
fullpath := filer2.FullPath("/home/chris/this/is/one/file1.jpg")
|
|
||||||
|
|
||||||
entry1 := &filer2.Entry{
|
|
||||||
FullPath: fullpath,
|
|
||||||
Attr: filer2.Attr{
|
|
||||||
Mode: 0440,
|
|
||||||
Uid: 1234,
|
|
||||||
Gid: 5678,
|
|
||||||
},
|
|
||||||
}
|
|
||||||
|
|
||||||
if err := filer.CreateEntry(ctx, entry1); err != nil {
|
|
||||||
t.Errorf("create entry %v: %v", entry1.FullPath, err)
|
|
||||||
return
|
|
||||||
}
|
|
||||||
|
|
||||||
entry, err := filer.FindEntry(ctx, fullpath)
|
|
||||||
|
|
||||||
if err != nil {
|
|
||||||
t.Errorf("find entry: %v", err)
|
|
||||||
return
|
|
||||||
}
|
|
||||||
|
|
||||||
if entry.FullPath != entry1.FullPath {
|
|
||||||
t.Errorf("find wrong entry: %v", entry.FullPath)
|
|
||||||
return
|
|
||||||
}
|
|
||||||
|
|
||||||
}
|
|
||||||
|
|
||||||
func TestCreateFileAndList(t *testing.T) {
|
|
||||||
filer := filer2.NewFiler(nil, nil)
|
|
||||||
store := &MemDbStore{}
|
|
||||||
store.Initialize(nil)
|
|
||||||
filer.SetStore(store)
|
|
||||||
filer.DisableDirectoryCache()
|
|
||||||
|
|
||||||
ctx := context.Background()
|
|
||||||
|
|
||||||
entry1 := &filer2.Entry{
|
|
||||||
FullPath: filer2.FullPath("/home/chris/this/is/one/file1.jpg"),
|
|
||||||
Attr: filer2.Attr{
|
|
||||||
Mode: 0440,
|
|
||||||
Uid: 1234,
|
|
||||||
Gid: 5678,
|
|
||||||
},
|
|
||||||
}
|
|
||||||
|
|
||||||
entry2 := &filer2.Entry{
|
|
||||||
FullPath: filer2.FullPath("/home/chris/this/is/one/file2.jpg"),
|
|
||||||
Attr: filer2.Attr{
|
|
||||||
Mode: 0440,
|
|
||||||
Uid: 1234,
|
|
||||||
Gid: 5678,
|
|
||||||
},
|
|
||||||
}
|
|
||||||
|
|
||||||
filer.CreateEntry(ctx, entry1)
|
|
||||||
filer.CreateEntry(ctx, entry2)
|
|
||||||
|
|
||||||
// checking the 2 files
|
|
||||||
entries, err := filer.ListDirectoryEntries(ctx, filer2.FullPath("/home/chris/this/is/one/"), "", false, 100)
|
|
||||||
|
|
||||||
if err != nil {
|
|
||||||
t.Errorf("list entries: %v", err)
|
|
||||||
return
|
|
||||||
}
|
|
||||||
|
|
||||||
if len(entries) != 2 {
|
|
||||||
t.Errorf("list entries count: %v", len(entries))
|
|
||||||
return
|
|
||||||
}
|
|
||||||
|
|
||||||
if entries[0].FullPath != entry1.FullPath {
|
|
||||||
t.Errorf("find wrong entry 1: %v", entries[0].FullPath)
|
|
||||||
return
|
|
||||||
}
|
|
||||||
|
|
||||||
if entries[1].FullPath != entry2.FullPath {
|
|
||||||
t.Errorf("find wrong entry 2: %v", entries[1].FullPath)
|
|
||||||
return
|
|
||||||
}
|
|
||||||
|
|
||||||
// checking the offset
|
|
||||||
entries, err = filer.ListDirectoryEntries(ctx, filer2.FullPath("/home/chris/this/is/one/"), "file1.jpg", false, 100)
|
|
||||||
if len(entries) != 1 {
|
|
||||||
t.Errorf("list entries count: %v", len(entries))
|
|
||||||
return
|
|
||||||
}
|
|
||||||
|
|
||||||
// checking one upper directory
|
|
||||||
entries, _ = filer.ListDirectoryEntries(ctx, filer2.FullPath("/home/chris/this/is"), "", false, 100)
|
|
||||||
if len(entries) != 1 {
|
|
||||||
t.Errorf("list entries count: %v", len(entries))
|
|
||||||
return
|
|
||||||
}
|
|
||||||
|
|
||||||
// checking root directory
|
|
||||||
entries, _ = filer.ListDirectoryEntries(ctx, filer2.FullPath("/"), "", false, 100)
|
|
||||||
if len(entries) != 1 {
|
|
||||||
t.Errorf("list entries count: %v", len(entries))
|
|
||||||
return
|
|
||||||
}
|
|
||||||
|
|
||||||
// add file3
|
|
||||||
file3Path := filer2.FullPath("/home/chris/this/is/file3.jpg")
|
|
||||||
entry3 := &filer2.Entry{
|
|
||||||
FullPath: file3Path,
|
|
||||||
Attr: filer2.Attr{
|
|
||||||
Mode: 0440,
|
|
||||||
Uid: 1234,
|
|
||||||
Gid: 5678,
|
|
||||||
},
|
|
||||||
}
|
|
||||||
filer.CreateEntry(ctx, entry3)
|
|
||||||
|
|
||||||
// checking one upper directory
|
|
||||||
entries, _ = filer.ListDirectoryEntries(ctx, filer2.FullPath("/home/chris/this/is"), "", false, 100)
|
|
||||||
if len(entries) != 2 {
|
|
||||||
t.Errorf("list entries count: %v", len(entries))
|
|
||||||
return
|
|
||||||
}
|
|
||||||
|
|
||||||
// delete file and count
|
|
||||||
filer.DeleteEntryMetaAndData(ctx, file3Path, false, false, false)
|
|
||||||
entries, _ = filer.ListDirectoryEntries(ctx, filer2.FullPath("/home/chris/this/is"), "", false, 100)
|
|
||||||
if len(entries) != 1 {
|
|
||||||
t.Errorf("list entries count: %v", len(entries))
|
|
||||||
return
|
|
||||||
}
|
|
||||||
|
|
||||||
}
|
|
@ -26,28 +26,35 @@ func (store *MysqlStore) GetName() string {
|
|||||||
return "mysql"
|
return "mysql"
|
||||||
}
|
}
|
||||||
|
|
||||||
func (store *MysqlStore) Initialize(configuration util.Configuration) (err error) {
|
func (store *MysqlStore) Initialize(configuration util.Configuration, prefix string) (err error) {
|
||||||
return store.initialize(
|
return store.initialize(
|
||||||
configuration.GetString("username"),
|
configuration.GetString(prefix+"username"),
|
||||||
configuration.GetString("password"),
|
configuration.GetString(prefix+"password"),
|
||||||
configuration.GetString("hostname"),
|
configuration.GetString(prefix+"hostname"),
|
||||||
configuration.GetInt("port"),
|
configuration.GetInt(prefix+"port"),
|
||||||
configuration.GetString("database"),
|
configuration.GetString(prefix+"database"),
|
||||||
configuration.GetInt("connection_max_idle"),
|
configuration.GetInt(prefix+"connection_max_idle"),
|
||||||
configuration.GetInt("connection_max_open"),
|
configuration.GetInt(prefix+"connection_max_open"),
|
||||||
|
configuration.GetBool(prefix+"interpolateParams"),
|
||||||
)
|
)
|
||||||
}
|
}
|
||||||
|
|
||||||
func (store *MysqlStore) initialize(user, password, hostname string, port int, database string, maxIdle, maxOpen int) (err error) {
|
func (store *MysqlStore) initialize(user, password, hostname string, port int, database string, maxIdle, maxOpen int,
|
||||||
|
interpolateParams bool) (err error) {
|
||||||
|
|
||||||
store.SqlInsert = "INSERT INTO filemeta (dirhash,name,directory,meta) VALUES(?,?,?,?)"
|
store.SqlInsert = "INSERT INTO filemeta (dirhash,name,directory,meta) VALUES(?,?,?,?)"
|
||||||
store.SqlUpdate = "UPDATE filemeta SET meta=? WHERE dirhash=? AND name=? AND directory=?"
|
store.SqlUpdate = "UPDATE filemeta SET meta=? WHERE dirhash=? AND name=? AND directory=?"
|
||||||
store.SqlFind = "SELECT meta FROM filemeta WHERE dirhash=? AND name=? AND directory=?"
|
store.SqlFind = "SELECT meta FROM filemeta WHERE dirhash=? AND name=? AND directory=?"
|
||||||
store.SqlDelete = "DELETE FROM filemeta WHERE dirhash=? AND name=? AND directory=?"
|
store.SqlDelete = "DELETE FROM filemeta WHERE dirhash=? AND name=? AND directory=?"
|
||||||
|
store.SqlDeleteFolderChildren = "DELETE FROM filemeta WHERE dirhash=? AND directory=?"
|
||||||
store.SqlListExclusive = "SELECT NAME, meta FROM filemeta WHERE dirhash=? AND name>? AND directory=? ORDER BY NAME ASC LIMIT ?"
|
store.SqlListExclusive = "SELECT NAME, meta FROM filemeta WHERE dirhash=? AND name>? AND directory=? ORDER BY NAME ASC LIMIT ?"
|
||||||
store.SqlListInclusive = "SELECT NAME, meta FROM filemeta WHERE dirhash=? AND name>=? AND directory=? ORDER BY NAME ASC LIMIT ?"
|
store.SqlListInclusive = "SELECT NAME, meta FROM filemeta WHERE dirhash=? AND name>=? AND directory=? ORDER BY NAME ASC LIMIT ?"
|
||||||
|
|
||||||
sqlUrl := fmt.Sprintf(CONNECTION_URL_PATTERN, user, password, hostname, port, database)
|
sqlUrl := fmt.Sprintf(CONNECTION_URL_PATTERN, user, password, hostname, port, database)
|
||||||
|
if interpolateParams {
|
||||||
|
sqlUrl += "&interpolateParams=true"
|
||||||
|
}
|
||||||
|
|
||||||
var dbErr error
|
var dbErr error
|
||||||
store.DB, dbErr = sql.Open("mysql", sqlUrl)
|
store.DB, dbErr = sql.Open("mysql", sqlUrl)
|
||||||
if dbErr != nil {
|
if dbErr != nil {
|
||||||
|
@ -26,16 +26,16 @@ func (store *PostgresStore) GetName() string {
|
|||||||
return "postgres"
|
return "postgres"
|
||||||
}
|
}
|
||||||
|
|
||||||
func (store *PostgresStore) Initialize(configuration util.Configuration) (err error) {
|
func (store *PostgresStore) Initialize(configuration util.Configuration, prefix string) (err error) {
|
||||||
return store.initialize(
|
return store.initialize(
|
||||||
configuration.GetString("username"),
|
configuration.GetString(prefix+"username"),
|
||||||
configuration.GetString("password"),
|
configuration.GetString(prefix+"password"),
|
||||||
configuration.GetString("hostname"),
|
configuration.GetString(prefix+"hostname"),
|
||||||
configuration.GetInt("port"),
|
configuration.GetInt(prefix+"port"),
|
||||||
configuration.GetString("database"),
|
configuration.GetString(prefix+"database"),
|
||||||
configuration.GetString("sslmode"),
|
configuration.GetString(prefix+"sslmode"),
|
||||||
configuration.GetInt("connection_max_idle"),
|
configuration.GetInt(prefix+"connection_max_idle"),
|
||||||
configuration.GetInt("connection_max_open"),
|
configuration.GetInt(prefix+"connection_max_open"),
|
||||||
)
|
)
|
||||||
}
|
}
|
||||||
|
|
||||||
@ -45,6 +45,7 @@ func (store *PostgresStore) initialize(user, password, hostname string, port int
|
|||||||
store.SqlUpdate = "UPDATE filemeta SET meta=$1 WHERE dirhash=$2 AND name=$3 AND directory=$4"
|
store.SqlUpdate = "UPDATE filemeta SET meta=$1 WHERE dirhash=$2 AND name=$3 AND directory=$4"
|
||||||
store.SqlFind = "SELECT meta FROM filemeta WHERE dirhash=$1 AND name=$2 AND directory=$3"
|
store.SqlFind = "SELECT meta FROM filemeta WHERE dirhash=$1 AND name=$2 AND directory=$3"
|
||||||
store.SqlDelete = "DELETE FROM filemeta WHERE dirhash=$1 AND name=$2 AND directory=$3"
|
store.SqlDelete = "DELETE FROM filemeta WHERE dirhash=$1 AND name=$2 AND directory=$3"
|
||||||
|
store.SqlDeleteFolderChildren = "DELETE FROM filemeta WHERE dirhash=$1 AND directory=$2"
|
||||||
store.SqlListExclusive = "SELECT NAME, meta FROM filemeta WHERE dirhash=$1 AND name>$2 AND directory=$3 ORDER BY NAME ASC LIMIT $4"
|
store.SqlListExclusive = "SELECT NAME, meta FROM filemeta WHERE dirhash=$1 AND name>$2 AND directory=$3 ORDER BY NAME ASC LIMIT $4"
|
||||||
store.SqlListInclusive = "SELECT NAME, meta FROM filemeta WHERE dirhash=$1 AND name>=$2 AND directory=$3 ORDER BY NAME ASC LIMIT $4"
|
store.SqlListInclusive = "SELECT NAME, meta FROM filemeta WHERE dirhash=$1 AND name>=$2 AND directory=$3 ORDER BY NAME ASC LIMIT $4"
|
||||||
|
|
||||||
|
@ -18,17 +18,25 @@ func (store *RedisClusterStore) GetName() string {
|
|||||||
return "redis_cluster"
|
return "redis_cluster"
|
||||||
}
|
}
|
||||||
|
|
||||||
func (store *RedisClusterStore) Initialize(configuration util.Configuration) (err error) {
|
func (store *RedisClusterStore) Initialize(configuration util.Configuration, prefix string) (err error) {
|
||||||
|
|
||||||
|
configuration.SetDefault(prefix+"useReadOnly", true)
|
||||||
|
configuration.SetDefault(prefix+"routeByLatency", true)
|
||||||
|
|
||||||
return store.initialize(
|
return store.initialize(
|
||||||
configuration.GetStringSlice("addresses"),
|
configuration.GetStringSlice(prefix+"addresses"),
|
||||||
configuration.GetString("password"),
|
configuration.GetString(prefix+"password"),
|
||||||
|
configuration.GetBool(prefix+"useReadOnly"),
|
||||||
|
configuration.GetBool(prefix+"routeByLatency"),
|
||||||
)
|
)
|
||||||
}
|
}
|
||||||
|
|
||||||
func (store *RedisClusterStore) initialize(addresses []string, password string) (err error) {
|
func (store *RedisClusterStore) initialize(addresses []string, password string, readOnly, routeByLatency bool) (err error) {
|
||||||
store.Client = redis.NewClusterClient(&redis.ClusterOptions{
|
store.Client = redis.NewClusterClient(&redis.ClusterOptions{
|
||||||
Addrs: addresses,
|
Addrs: addresses,
|
||||||
Password: password,
|
Password: password,
|
||||||
|
ReadOnly: readOnly,
|
||||||
|
RouteByLatency: routeByLatency,
|
||||||
})
|
})
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
|
@ -18,11 +18,11 @@ func (store *RedisStore) GetName() string {
|
|||||||
return "redis"
|
return "redis"
|
||||||
}
|
}
|
||||||
|
|
||||||
func (store *RedisStore) Initialize(configuration util.Configuration) (err error) {
|
func (store *RedisStore) Initialize(configuration util.Configuration, prefix string) (err error) {
|
||||||
return store.initialize(
|
return store.initialize(
|
||||||
configuration.GetString("address"),
|
configuration.GetString(prefix+"address"),
|
||||||
configuration.GetString("password"),
|
configuration.GetString(prefix+"password"),
|
||||||
configuration.GetInt("database"),
|
configuration.GetInt(prefix+"database"),
|
||||||
)
|
)
|
||||||
}
|
}
|
||||||
|
|
||||||
|
@ -99,6 +99,24 @@ func (store *UniversalRedisStore) DeleteEntry(ctx context.Context, fullpath file
|
|||||||
return nil
|
return nil
|
||||||
}
|
}
|
||||||
|
|
||||||
|
func (store *UniversalRedisStore) DeleteFolderChildren(ctx context.Context, fullpath filer2.FullPath) (err error) {
|
||||||
|
|
||||||
|
members, err := store.Client.SMembers(genDirectoryListKey(string(fullpath))).Result()
|
||||||
|
if err != nil {
|
||||||
|
return fmt.Errorf("delete folder %s : %v", fullpath, err)
|
||||||
|
}
|
||||||
|
|
||||||
|
for _, fileName := range members {
|
||||||
|
path := filer2.NewFullPath(string(fullpath), fileName)
|
||||||
|
_, err = store.Client.Del(string(path)).Result()
|
||||||
|
if err != nil {
|
||||||
|
return fmt.Errorf("delete %s in parent dir: %v", fullpath, err)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
func (store *UniversalRedisStore) ListDirectoryEntries(ctx context.Context, fullpath filer2.FullPath, startFileName string, inclusive bool,
|
func (store *UniversalRedisStore) ListDirectoryEntries(ctx context.Context, fullpath filer2.FullPath, startFileName string, inclusive bool,
|
||||||
limit int) (entries []*filer2.Entry, err error) {
|
limit int) (entries []*filer2.Entry, err error) {
|
||||||
|
|
||||||
|
@ -1,3 +1,6 @@
|
|||||||
|
// +build !386
|
||||||
|
// +build !arm
|
||||||
|
|
||||||
package tikv
|
package tikv
|
||||||
|
|
||||||
import (
|
import (
|
||||||
@ -27,8 +30,8 @@ func (store *TikvStore) GetName() string {
|
|||||||
return "tikv"
|
return "tikv"
|
||||||
}
|
}
|
||||||
|
|
||||||
func (store *TikvStore) Initialize(configuration weed_util.Configuration) (err error) {
|
func (store *TikvStore) Initialize(configuration weed_util.Configuration, prefix string) (err error) {
|
||||||
pdAddr := configuration.GetString("pdAddress")
|
pdAddr := configuration.GetString(prefix + "pdAddress")
|
||||||
return store.initialize(pdAddr)
|
return store.initialize(pdAddr)
|
||||||
}
|
}
|
||||||
|
|
||||||
@ -138,6 +141,38 @@ func (store *TikvStore) DeleteEntry(ctx context.Context, fullpath filer2.FullPat
|
|||||||
return nil
|
return nil
|
||||||
}
|
}
|
||||||
|
|
||||||
|
func (store *TikvStore) DeleteFolderChildren(ctx context.Context, fullpath filer2.FullPath) (err error) {
|
||||||
|
|
||||||
|
directoryPrefix := genDirectoryKeyPrefix(fullpath, "")
|
||||||
|
|
||||||
|
tx := store.getTx(ctx)
|
||||||
|
|
||||||
|
iter, err := tx.Iter(directoryPrefix, nil)
|
||||||
|
if err != nil {
|
||||||
|
return fmt.Errorf("deleteFolderChildren %s: %v", fullpath, err)
|
||||||
|
}
|
||||||
|
defer iter.Close()
|
||||||
|
for iter.Valid() {
|
||||||
|
key := iter.Key()
|
||||||
|
if !bytes.HasPrefix(key, directoryPrefix) {
|
||||||
|
break
|
||||||
|
}
|
||||||
|
fileName := getNameFromKey(key)
|
||||||
|
if fileName == "" {
|
||||||
|
iter.Next()
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
|
||||||
|
if err = tx.Delete(genKey(string(fullpath), fileName)); err != nil {
|
||||||
|
return fmt.Errorf("delete %s : %v", fullpath, err)
|
||||||
|
}
|
||||||
|
|
||||||
|
iter.Next()
|
||||||
|
}
|
||||||
|
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
func (store *TikvStore) ListDirectoryEntries(ctx context.Context, fullpath filer2.FullPath, startFileName string, inclusive bool,
|
func (store *TikvStore) ListDirectoryEntries(ctx context.Context, fullpath filer2.FullPath, startFileName string, inclusive bool,
|
||||||
limit int) (entries []*filer2.Entry, err error) {
|
limit int) (entries []*filer2.Entry, err error) {
|
||||||
|
|
||||||
|
65
weed/filer2/tikv/tikv_store_unsupported.go
Normal file
65
weed/filer2/tikv/tikv_store_unsupported.go
Normal file
@ -0,0 +1,65 @@
|
|||||||
|
// +build 386 arm
|
||||||
|
|
||||||
|
package tikv
|
||||||
|
|
||||||
|
import (
|
||||||
|
"context"
|
||||||
|
"fmt"
|
||||||
|
|
||||||
|
"github.com/chrislusf/seaweedfs/weed/filer2"
|
||||||
|
weed_util "github.com/chrislusf/seaweedfs/weed/util"
|
||||||
|
)
|
||||||
|
|
||||||
|
func init() {
|
||||||
|
filer2.Stores = append(filer2.Stores, &TikvStore{})
|
||||||
|
}
|
||||||
|
|
||||||
|
type TikvStore struct {
|
||||||
|
}
|
||||||
|
|
||||||
|
func (store *TikvStore) GetName() string {
|
||||||
|
return "tikv"
|
||||||
|
}
|
||||||
|
|
||||||
|
func (store *TikvStore) Initialize(configuration weed_util.Configuration, prefix string) (err error) {
|
||||||
|
return fmt.Errorf("not implemented for 32 bit computers")
|
||||||
|
}
|
||||||
|
|
||||||
|
func (store *TikvStore) initialize(pdAddr string) (err error) {
|
||||||
|
return fmt.Errorf("not implemented for 32 bit computers")
|
||||||
|
}
|
||||||
|
|
||||||
|
func (store *TikvStore) BeginTransaction(ctx context.Context) (context.Context, error) {
|
||||||
|
return nil, fmt.Errorf("not implemented for 32 bit computers")
|
||||||
|
}
|
||||||
|
func (store *TikvStore) CommitTransaction(ctx context.Context) error {
|
||||||
|
return fmt.Errorf("not implemented for 32 bit computers")
|
||||||
|
}
|
||||||
|
func (store *TikvStore) RollbackTransaction(ctx context.Context) error {
|
||||||
|
return fmt.Errorf("not implemented for 32 bit computers")
|
||||||
|
}
|
||||||
|
|
||||||
|
func (store *TikvStore) InsertEntry(ctx context.Context, entry *filer2.Entry) (err error) {
|
||||||
|
return fmt.Errorf("not implemented for 32 bit computers")
|
||||||
|
}
|
||||||
|
|
||||||
|
func (store *TikvStore) UpdateEntry(ctx context.Context, entry *filer2.Entry) (err error) {
|
||||||
|
return fmt.Errorf("not implemented for 32 bit computers")
|
||||||
|
}
|
||||||
|
|
||||||
|
func (store *TikvStore) FindEntry(ctx context.Context, fullpath filer2.FullPath) (entry *filer2.Entry, err error) {
|
||||||
|
return nil, fmt.Errorf("not implemented for 32 bit computers")
|
||||||
|
}
|
||||||
|
|
||||||
|
func (store *TikvStore) DeleteEntry(ctx context.Context, fullpath filer2.FullPath) (err error) {
|
||||||
|
return fmt.Errorf("not implemented for 32 bit computers")
|
||||||
|
}
|
||||||
|
|
||||||
|
func (store *TikvStore) DeleteFolderChildren(ctx context.Context, fullpath filer2.FullPath) (err error) {
|
||||||
|
return fmt.Errorf("not implemented for 32 bit computers")
|
||||||
|
}
|
||||||
|
|
||||||
|
func (store *TikvStore) ListDirectoryEntries(ctx context.Context, fullpath filer2.FullPath, startFileName string, inclusive bool,
|
||||||
|
limit int) (entries []*filer2.Entry, err error) {
|
||||||
|
return nil, fmt.Errorf("not implemented for 32 bit computers")
|
||||||
|
}
|
@ -3,7 +3,7 @@ package filesys
|
|||||||
import (
|
import (
|
||||||
"context"
|
"context"
|
||||||
"os"
|
"os"
|
||||||
"path"
|
"strings"
|
||||||
"time"
|
"time"
|
||||||
|
|
||||||
"github.com/chrislusf/seaweedfs/weed/filer2"
|
"github.com/chrislusf/seaweedfs/weed/filer2"
|
||||||
@ -14,9 +14,9 @@ import (
|
|||||||
)
|
)
|
||||||
|
|
||||||
type Dir struct {
|
type Dir struct {
|
||||||
Path string
|
Path string
|
||||||
wfs *WFS
|
wfs *WFS
|
||||||
attributes *filer_pb.FuseAttributes
|
entry *filer_pb.Entry
|
||||||
}
|
}
|
||||||
|
|
||||||
var _ = fs.Node(&Dir{})
|
var _ = fs.Node(&Dir{})
|
||||||
@ -27,50 +27,56 @@ var _ = fs.HandleReadDirAller(&Dir{})
|
|||||||
var _ = fs.NodeRemover(&Dir{})
|
var _ = fs.NodeRemover(&Dir{})
|
||||||
var _ = fs.NodeRenamer(&Dir{})
|
var _ = fs.NodeRenamer(&Dir{})
|
||||||
var _ = fs.NodeSetattrer(&Dir{})
|
var _ = fs.NodeSetattrer(&Dir{})
|
||||||
|
var _ = fs.NodeGetxattrer(&Dir{})
|
||||||
|
var _ = fs.NodeSetxattrer(&Dir{})
|
||||||
|
var _ = fs.NodeRemovexattrer(&Dir{})
|
||||||
|
var _ = fs.NodeListxattrer(&Dir{})
|
||||||
|
var _ = fs.NodeForgetter(&Dir{})
|
||||||
|
|
||||||
func (dir *Dir) Attr(ctx context.Context, attr *fuse.Attr) error {
|
func (dir *Dir) Attr(ctx context.Context, attr *fuse.Attr) error {
|
||||||
|
|
||||||
|
glog.V(3).Infof("dir Attr %s, existing attr: %+v", dir.Path, attr)
|
||||||
|
|
||||||
// https://github.com/bazil/fuse/issues/196
|
// https://github.com/bazil/fuse/issues/196
|
||||||
attr.Valid = time.Second
|
attr.Valid = time.Second
|
||||||
|
|
||||||
if dir.Path == dir.wfs.option.FilerMountRootPath {
|
if dir.Path == dir.wfs.option.FilerMountRootPath {
|
||||||
dir.setRootDirAttributes(attr)
|
dir.setRootDirAttributes(attr)
|
||||||
|
glog.V(3).Infof("root dir Attr %s, attr: %+v", dir.Path, attr)
|
||||||
return nil
|
return nil
|
||||||
}
|
}
|
||||||
|
|
||||||
item := dir.wfs.listDirectoryEntriesCache.Get(dir.Path)
|
if err := dir.maybeLoadEntry(ctx); err != nil {
|
||||||
if item != nil && !item.Expired() {
|
glog.V(3).Infof("dir Attr %s,err: %+v", dir.Path, err)
|
||||||
entry := item.Value().(*filer_pb.Entry)
|
|
||||||
|
|
||||||
attr.Mtime = time.Unix(entry.Attributes.Mtime, 0)
|
|
||||||
attr.Ctime = time.Unix(entry.Attributes.Crtime, 0)
|
|
||||||
attr.Mode = os.FileMode(entry.Attributes.FileMode)
|
|
||||||
attr.Gid = entry.Attributes.Gid
|
|
||||||
attr.Uid = entry.Attributes.Uid
|
|
||||||
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
|
|
||||||
entry, err := filer2.GetEntry(ctx, dir.wfs, dir.Path)
|
|
||||||
if err != nil {
|
|
||||||
glog.V(2).Infof("read dir %s attr: %v, error: %v", dir.Path, dir.attributes, err)
|
|
||||||
return err
|
return err
|
||||||
}
|
}
|
||||||
dir.attributes = entry.Attributes
|
|
||||||
|
|
||||||
glog.V(2).Infof("dir %s: %v perm: %v", dir.Path, dir.attributes, os.FileMode(dir.attributes.FileMode))
|
attr.Inode = filer2.FullPath(dir.Path).AsInode()
|
||||||
|
attr.Mode = os.FileMode(dir.entry.Attributes.FileMode) | os.ModeDir
|
||||||
|
attr.Mtime = time.Unix(dir.entry.Attributes.Mtime, 0)
|
||||||
|
attr.Ctime = time.Unix(dir.entry.Attributes.Crtime, 0)
|
||||||
|
attr.Gid = dir.entry.Attributes.Gid
|
||||||
|
attr.Uid = dir.entry.Attributes.Uid
|
||||||
|
|
||||||
attr.Mode = os.FileMode(dir.attributes.FileMode) | os.ModeDir
|
glog.V(3).Infof("dir Attr %s, attr: %+v", dir.Path, attr)
|
||||||
|
|
||||||
attr.Mtime = time.Unix(dir.attributes.Mtime, 0)
|
|
||||||
attr.Ctime = time.Unix(dir.attributes.Crtime, 0)
|
|
||||||
attr.Gid = dir.attributes.Gid
|
|
||||||
attr.Uid = dir.attributes.Uid
|
|
||||||
|
|
||||||
return nil
|
return nil
|
||||||
}
|
}
|
||||||
|
|
||||||
|
func (dir *Dir) Getxattr(ctx context.Context, req *fuse.GetxattrRequest, resp *fuse.GetxattrResponse) error {
|
||||||
|
|
||||||
|
glog.V(4).Infof("dir Getxattr %s", dir.Path)
|
||||||
|
|
||||||
|
if err := dir.maybeLoadEntry(ctx); err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
return getxattr(dir.entry, req, resp)
|
||||||
|
}
|
||||||
|
|
||||||
func (dir *Dir) setRootDirAttributes(attr *fuse.Attr) {
|
func (dir *Dir) setRootDirAttributes(attr *fuse.Attr) {
|
||||||
|
attr.Inode = 1 // filer2.FullPath(dir.Path).AsInode()
|
||||||
|
attr.Valid = time.Hour
|
||||||
attr.Uid = dir.wfs.option.MountUid
|
attr.Uid = dir.wfs.option.MountUid
|
||||||
attr.Gid = dir.wfs.option.MountGid
|
attr.Gid = dir.wfs.option.MountGid
|
||||||
attr.Mode = dir.wfs.option.MountMode
|
attr.Mode = dir.wfs.option.MountMode
|
||||||
@ -78,16 +84,25 @@ func (dir *Dir) setRootDirAttributes(attr *fuse.Attr) {
|
|||||||
attr.Ctime = dir.wfs.option.MountCtime
|
attr.Ctime = dir.wfs.option.MountCtime
|
||||||
attr.Mtime = dir.wfs.option.MountMtime
|
attr.Mtime = dir.wfs.option.MountMtime
|
||||||
attr.Atime = dir.wfs.option.MountMtime
|
attr.Atime = dir.wfs.option.MountMtime
|
||||||
|
attr.BlockSize = 1024 * 1024
|
||||||
}
|
}
|
||||||
|
|
||||||
func (dir *Dir) newFile(name string, entry *filer_pb.Entry) *File {
|
func (dir *Dir) newFile(name string, entry *filer_pb.Entry) fs.Node {
|
||||||
return &File{
|
return dir.wfs.getNode(filer2.NewFullPath(dir.Path, name), func() fs.Node {
|
||||||
Name: name,
|
return &File{
|
||||||
dir: dir,
|
Name: name,
|
||||||
wfs: dir.wfs,
|
dir: dir,
|
||||||
entry: entry,
|
wfs: dir.wfs,
|
||||||
entryViewCache: nil,
|
entry: entry,
|
||||||
}
|
entryViewCache: nil,
|
||||||
|
}
|
||||||
|
})
|
||||||
|
}
|
||||||
|
|
||||||
|
func (dir *Dir) newDirectory(fullpath filer2.FullPath, entry *filer_pb.Entry) fs.Node {
|
||||||
|
return dir.wfs.getNode(fullpath, func() fs.Node {
|
||||||
|
return &Dir{Path: string(fullpath), wfs: dir.wfs, entry: entry}
|
||||||
|
})
|
||||||
}
|
}
|
||||||
|
|
||||||
func (dir *Dir) Create(ctx context.Context, req *fuse.CreateRequest,
|
func (dir *Dir) Create(ctx context.Context, req *fuse.CreateRequest,
|
||||||
@ -109,92 +124,102 @@ func (dir *Dir) Create(ctx context.Context, req *fuse.CreateRequest,
|
|||||||
TtlSec: dir.wfs.option.TtlSec,
|
TtlSec: dir.wfs.option.TtlSec,
|
||||||
},
|
},
|
||||||
},
|
},
|
||||||
|
OExcl: req.Flags&fuse.OpenExclusive != 0,
|
||||||
}
|
}
|
||||||
glog.V(1).Infof("create: %v", request)
|
glog.V(1).Infof("create: %v", req.String())
|
||||||
|
|
||||||
if request.Entry.IsDirectory {
|
if err := dir.wfs.WithFilerClient(ctx, func(ctx context.Context, client filer_pb.SeaweedFilerClient) error {
|
||||||
if err := dir.wfs.WithFilerClient(ctx, func(client filer_pb.SeaweedFilerClient) error {
|
if err := filer_pb.CreateEntry(ctx, client, request); err != nil {
|
||||||
if _, err := client.CreateEntry(ctx, request); err != nil {
|
if strings.Contains(err.Error(), "EEXIST") {
|
||||||
glog.V(0).Infof("create %s/%s: %v", dir.Path, req.Name, err)
|
return fuse.EEXIST
|
||||||
return fuse.EIO
|
|
||||||
}
|
}
|
||||||
return nil
|
return fuse.EIO
|
||||||
}); err != nil {
|
|
||||||
return nil, nil, err
|
|
||||||
}
|
}
|
||||||
|
return nil
|
||||||
|
}); err != nil {
|
||||||
|
return nil, nil, err
|
||||||
|
}
|
||||||
|
var node fs.Node
|
||||||
|
if request.Entry.IsDirectory {
|
||||||
|
node = dir.newDirectory(filer2.NewFullPath(dir.Path, req.Name), request.Entry)
|
||||||
|
return node, nil, nil
|
||||||
}
|
}
|
||||||
|
|
||||||
file := dir.newFile(req.Name, request.Entry)
|
node = dir.newFile(req.Name, request.Entry)
|
||||||
if !request.Entry.IsDirectory {
|
file := node.(*File)
|
||||||
file.isOpen = true
|
file.isOpen++
|
||||||
}
|
|
||||||
fh := dir.wfs.AcquireHandle(file, req.Uid, req.Gid)
|
fh := dir.wfs.AcquireHandle(file, req.Uid, req.Gid)
|
||||||
fh.dirtyMetadata = true
|
|
||||||
return file, fh, nil
|
return file, fh, nil
|
||||||
|
|
||||||
}
|
}
|
||||||
|
|
||||||
func (dir *Dir) Mkdir(ctx context.Context, req *fuse.MkdirRequest) (fs.Node, error) {
|
func (dir *Dir) Mkdir(ctx context.Context, req *fuse.MkdirRequest) (fs.Node, error) {
|
||||||
|
|
||||||
err := dir.wfs.WithFilerClient(ctx, func(client filer_pb.SeaweedFilerClient) error {
|
newEntry := &filer_pb.Entry{
|
||||||
|
Name: req.Name,
|
||||||
|
IsDirectory: true,
|
||||||
|
Attributes: &filer_pb.FuseAttributes{
|
||||||
|
Mtime: time.Now().Unix(),
|
||||||
|
Crtime: time.Now().Unix(),
|
||||||
|
FileMode: uint32(req.Mode &^ dir.wfs.option.Umask),
|
||||||
|
Uid: req.Uid,
|
||||||
|
Gid: req.Gid,
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
err := dir.wfs.WithFilerClient(ctx, func(ctx context.Context, client filer_pb.SeaweedFilerClient) error {
|
||||||
|
|
||||||
request := &filer_pb.CreateEntryRequest{
|
request := &filer_pb.CreateEntryRequest{
|
||||||
Directory: dir.Path,
|
Directory: dir.Path,
|
||||||
Entry: &filer_pb.Entry{
|
Entry: newEntry,
|
||||||
Name: req.Name,
|
|
||||||
IsDirectory: true,
|
|
||||||
Attributes: &filer_pb.FuseAttributes{
|
|
||||||
Mtime: time.Now().Unix(),
|
|
||||||
Crtime: time.Now().Unix(),
|
|
||||||
FileMode: uint32(req.Mode &^ dir.wfs.option.Umask),
|
|
||||||
Uid: req.Uid,
|
|
||||||
Gid: req.Gid,
|
|
||||||
},
|
|
||||||
},
|
|
||||||
}
|
}
|
||||||
|
|
||||||
glog.V(1).Infof("mkdir: %v", request)
|
glog.V(1).Infof("mkdir: %v", request)
|
||||||
if _, err := client.CreateEntry(ctx, request); err != nil {
|
if err := filer_pb.CreateEntry(ctx, client, request); err != nil {
|
||||||
glog.V(0).Infof("mkdir %s/%s: %v", dir.Path, req.Name, err)
|
glog.V(0).Infof("mkdir %s/%s: %v", dir.Path, req.Name, err)
|
||||||
return fuse.EIO
|
return err
|
||||||
}
|
}
|
||||||
|
|
||||||
return nil
|
return nil
|
||||||
})
|
})
|
||||||
|
|
||||||
if err == nil {
|
if err == nil {
|
||||||
node := &Dir{Path: path.Join(dir.Path, req.Name), wfs: dir.wfs}
|
node := dir.newDirectory(filer2.NewFullPath(dir.Path, req.Name), newEntry)
|
||||||
return node, nil
|
return node, nil
|
||||||
}
|
}
|
||||||
|
|
||||||
return nil, err
|
return nil, fuse.EIO
|
||||||
}
|
}
|
||||||
|
|
||||||
func (dir *Dir) Lookup(ctx context.Context, req *fuse.LookupRequest, resp *fuse.LookupResponse) (node fs.Node, err error) {
|
func (dir *Dir) Lookup(ctx context.Context, req *fuse.LookupRequest, resp *fuse.LookupResponse) (node fs.Node, err error) {
|
||||||
|
|
||||||
var entry *filer_pb.Entry
|
glog.V(4).Infof("dir Lookup %s: %s", dir.Path, req.Name)
|
||||||
fullFilePath := path.Join(dir.Path, req.Name)
|
|
||||||
|
|
||||||
item := dir.wfs.listDirectoryEntriesCache.Get(fullFilePath)
|
fullFilePath := filer2.NewFullPath(dir.Path, req.Name)
|
||||||
if item != nil && !item.Expired() {
|
entry := dir.wfs.cacheGet(fullFilePath)
|
||||||
entry = item.Value().(*filer_pb.Entry)
|
|
||||||
}
|
|
||||||
|
|
||||||
if entry == nil {
|
if entry == nil {
|
||||||
|
// glog.V(3).Infof("dir Lookup cache miss %s", fullFilePath)
|
||||||
entry, err = filer2.GetEntry(ctx, dir.wfs, fullFilePath)
|
entry, err = filer2.GetEntry(ctx, dir.wfs, fullFilePath)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return nil, err
|
glog.V(1).Infof("dir GetEntry %s: %v", fullFilePath, err)
|
||||||
|
return nil, fuse.ENOENT
|
||||||
}
|
}
|
||||||
|
dir.wfs.cacheSet(fullFilePath, entry, 5*time.Minute)
|
||||||
|
} else {
|
||||||
|
glog.V(4).Infof("dir Lookup cache hit %s", fullFilePath)
|
||||||
}
|
}
|
||||||
|
|
||||||
if entry != nil {
|
if entry != nil {
|
||||||
if entry.IsDirectory {
|
if entry.IsDirectory {
|
||||||
node = &Dir{Path: path.Join(dir.Path, req.Name), wfs: dir.wfs, attributes: entry.Attributes}
|
node = dir.newDirectory(fullFilePath, entry)
|
||||||
} else {
|
} else {
|
||||||
node = dir.newFile(req.Name, entry)
|
node = dir.newFile(req.Name, entry)
|
||||||
}
|
}
|
||||||
|
|
||||||
resp.EntryValid = time.Duration(0)
|
// resp.EntryValid = time.Second
|
||||||
|
resp.Attr.Inode = fullFilePath.AsInode()
|
||||||
|
resp.Attr.Valid = time.Second
|
||||||
resp.Attr.Mtime = time.Unix(entry.Attributes.Mtime, 0)
|
resp.Attr.Mtime = time.Unix(entry.Attributes.Mtime, 0)
|
||||||
resp.Attr.Ctime = time.Unix(entry.Attributes.Crtime, 0)
|
resp.Attr.Ctime = time.Unix(entry.Attributes.Crtime, 0)
|
||||||
resp.Attr.Mode = os.FileMode(entry.Attributes.FileMode)
|
resp.Attr.Mode = os.FileMode(entry.Attributes.FileMode)
|
||||||
@ -204,57 +229,32 @@ func (dir *Dir) Lookup(ctx context.Context, req *fuse.LookupRequest, resp *fuse.
|
|||||||
return node, nil
|
return node, nil
|
||||||
}
|
}
|
||||||
|
|
||||||
|
glog.V(1).Infof("not found dir GetEntry %s: %v", fullFilePath, err)
|
||||||
return nil, fuse.ENOENT
|
return nil, fuse.ENOENT
|
||||||
}
|
}
|
||||||
|
|
||||||
func (dir *Dir) ReadDirAll(ctx context.Context) (ret []fuse.Dirent, err error) {
|
func (dir *Dir) ReadDirAll(ctx context.Context) (ret []fuse.Dirent, err error) {
|
||||||
|
|
||||||
err = dir.wfs.WithFilerClient(ctx, func(client filer_pb.SeaweedFilerClient) error {
|
glog.V(3).Infof("dir ReadDirAll %s", dir.Path)
|
||||||
|
|
||||||
paginationLimit := 1024
|
cacheTtl := 5 * time.Minute
|
||||||
remaining := dir.wfs.option.DirListingLimit
|
|
||||||
|
|
||||||
lastEntryName := ""
|
|
||||||
|
|
||||||
for remaining >= 0 {
|
|
||||||
|
|
||||||
request := &filer_pb.ListEntriesRequest{
|
|
||||||
Directory: dir.Path,
|
|
||||||
StartFromFileName: lastEntryName,
|
|
||||||
Limit: uint32(paginationLimit),
|
|
||||||
}
|
|
||||||
|
|
||||||
glog.V(4).Infof("read directory: %v", request)
|
|
||||||
resp, err := client.ListEntries(ctx, request)
|
|
||||||
if err != nil {
|
|
||||||
glog.V(0).Infof("list %s: %v", dir.Path, err)
|
|
||||||
return fuse.EIO
|
|
||||||
}
|
|
||||||
|
|
||||||
cacheTtl := estimatedCacheTtl(len(resp.Entries))
|
|
||||||
|
|
||||||
for _, entry := range resp.Entries {
|
|
||||||
if entry.IsDirectory {
|
|
||||||
dirent := fuse.Dirent{Name: entry.Name, Type: fuse.DT_Dir}
|
|
||||||
ret = append(ret, dirent)
|
|
||||||
} else {
|
|
||||||
dirent := fuse.Dirent{Name: entry.Name, Type: fuse.DT_File}
|
|
||||||
ret = append(ret, dirent)
|
|
||||||
}
|
|
||||||
dir.wfs.listDirectoryEntriesCache.Set(path.Join(dir.Path, entry.Name), entry, cacheTtl)
|
|
||||||
lastEntryName = entry.Name
|
|
||||||
}
|
|
||||||
|
|
||||||
remaining -= len(resp.Entries)
|
|
||||||
|
|
||||||
if len(resp.Entries) < paginationLimit {
|
|
||||||
break
|
|
||||||
}
|
|
||||||
|
|
||||||
|
readErr := filer2.ReadDirAllEntries(ctx, dir.wfs, filer2.FullPath(dir.Path), "", func(entry *filer_pb.Entry, isLast bool) {
|
||||||
|
fullpath := filer2.NewFullPath(dir.Path, entry.Name)
|
||||||
|
inode := fullpath.AsInode()
|
||||||
|
if entry.IsDirectory {
|
||||||
|
dirent := fuse.Dirent{Inode: inode, Name: entry.Name, Type: fuse.DT_Dir}
|
||||||
|
ret = append(ret, dirent)
|
||||||
|
} else {
|
||||||
|
dirent := fuse.Dirent{Inode: inode, Name: entry.Name, Type: fuse.DT_File}
|
||||||
|
ret = append(ret, dirent)
|
||||||
}
|
}
|
||||||
|
dir.wfs.cacheSet(fullpath, entry, cacheTtl)
|
||||||
return nil
|
|
||||||
})
|
})
|
||||||
|
if readErr != nil {
|
||||||
|
glog.V(0).Infof("list %s: %v", dir.Path, err)
|
||||||
|
return ret, fuse.EIO
|
||||||
|
}
|
||||||
|
|
||||||
return ret, err
|
return ret, err
|
||||||
}
|
}
|
||||||
@ -271,14 +271,17 @@ func (dir *Dir) Remove(ctx context.Context, req *fuse.RemoveRequest) error {
|
|||||||
|
|
||||||
func (dir *Dir) removeOneFile(ctx context.Context, req *fuse.RemoveRequest) error {
|
func (dir *Dir) removeOneFile(ctx context.Context, req *fuse.RemoveRequest) error {
|
||||||
|
|
||||||
entry, err := filer2.GetEntry(ctx, dir.wfs, path.Join(dir.Path, req.Name))
|
filePath := filer2.NewFullPath(dir.Path, req.Name)
|
||||||
|
entry, err := filer2.GetEntry(ctx, dir.wfs, filePath)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return err
|
return err
|
||||||
}
|
}
|
||||||
|
|
||||||
dir.wfs.deleteFileChunks(ctx, entry.Chunks)
|
dir.wfs.deleteFileChunks(ctx, entry.Chunks)
|
||||||
|
|
||||||
return dir.wfs.WithFilerClient(ctx, func(client filer_pb.SeaweedFilerClient) error {
|
dir.wfs.cacheDelete(filePath)
|
||||||
|
|
||||||
|
return dir.wfs.WithFilerClient(ctx, func(ctx context.Context, client filer_pb.SeaweedFilerClient) error {
|
||||||
|
|
||||||
request := &filer_pb.DeleteEntryRequest{
|
request := &filer_pb.DeleteEntryRequest{
|
||||||
Directory: dir.Path,
|
Directory: dir.Path,
|
||||||
@ -289,12 +292,10 @@ func (dir *Dir) removeOneFile(ctx context.Context, req *fuse.RemoveRequest) erro
|
|||||||
glog.V(3).Infof("remove file: %v", request)
|
glog.V(3).Infof("remove file: %v", request)
|
||||||
_, err := client.DeleteEntry(ctx, request)
|
_, err := client.DeleteEntry(ctx, request)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
glog.V(3).Infof("remove file %s/%s: %v", dir.Path, req.Name, err)
|
glog.V(3).Infof("not found remove file %s/%s: %v", dir.Path, req.Name, err)
|
||||||
return fuse.ENOENT
|
return fuse.ENOENT
|
||||||
}
|
}
|
||||||
|
|
||||||
dir.wfs.listDirectoryEntriesCache.Delete(path.Join(dir.Path, req.Name))
|
|
||||||
|
|
||||||
return nil
|
return nil
|
||||||
})
|
})
|
||||||
|
|
||||||
@ -302,7 +303,9 @@ func (dir *Dir) removeOneFile(ctx context.Context, req *fuse.RemoveRequest) erro
|
|||||||
|
|
||||||
func (dir *Dir) removeFolder(ctx context.Context, req *fuse.RemoveRequest) error {
|
func (dir *Dir) removeFolder(ctx context.Context, req *fuse.RemoveRequest) error {
|
||||||
|
|
||||||
return dir.wfs.WithFilerClient(ctx, func(client filer_pb.SeaweedFilerClient) error {
|
dir.wfs.cacheDelete(filer2.NewFullPath(dir.Path, req.Name))
|
||||||
|
|
||||||
|
return dir.wfs.WithFilerClient(ctx, func(ctx context.Context, client filer_pb.SeaweedFilerClient) error {
|
||||||
|
|
||||||
request := &filer_pb.DeleteEntryRequest{
|
request := &filer_pb.DeleteEntryRequest{
|
||||||
Directory: dir.Path,
|
Directory: dir.Path,
|
||||||
@ -313,12 +316,10 @@ func (dir *Dir) removeFolder(ctx context.Context, req *fuse.RemoveRequest) error
|
|||||||
glog.V(3).Infof("remove directory entry: %v", request)
|
glog.V(3).Infof("remove directory entry: %v", request)
|
||||||
_, err := client.DeleteEntry(ctx, request)
|
_, err := client.DeleteEntry(ctx, request)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
glog.V(3).Infof("remove %s/%s: %v", dir.Path, req.Name, err)
|
glog.V(3).Infof("not found remove %s/%s: %v", dir.Path, req.Name, err)
|
||||||
return fuse.ENOENT
|
return fuse.ENOENT
|
||||||
}
|
}
|
||||||
|
|
||||||
dir.wfs.listDirectoryEntriesCache.Delete(path.Join(dir.Path, req.Name))
|
|
||||||
|
|
||||||
return nil
|
return nil
|
||||||
})
|
})
|
||||||
|
|
||||||
@ -326,66 +327,122 @@ func (dir *Dir) removeFolder(ctx context.Context, req *fuse.RemoveRequest) error
|
|||||||
|
|
||||||
func (dir *Dir) Setattr(ctx context.Context, req *fuse.SetattrRequest, resp *fuse.SetattrResponse) error {
|
func (dir *Dir) Setattr(ctx context.Context, req *fuse.SetattrRequest, resp *fuse.SetattrResponse) error {
|
||||||
|
|
||||||
if dir.attributes == nil {
|
glog.V(3).Infof("%v dir setattr %+v", dir.Path, req)
|
||||||
return nil
|
|
||||||
|
if err := dir.maybeLoadEntry(ctx); err != nil {
|
||||||
|
return err
|
||||||
}
|
}
|
||||||
|
|
||||||
glog.V(3).Infof("%v dir setattr %+v, fh=%d", dir.Path, req, req.Handle)
|
|
||||||
if req.Valid.Mode() {
|
if req.Valid.Mode() {
|
||||||
dir.attributes.FileMode = uint32(req.Mode)
|
dir.entry.Attributes.FileMode = uint32(req.Mode)
|
||||||
}
|
}
|
||||||
|
|
||||||
if req.Valid.Uid() {
|
if req.Valid.Uid() {
|
||||||
dir.attributes.Uid = req.Uid
|
dir.entry.Attributes.Uid = req.Uid
|
||||||
}
|
}
|
||||||
|
|
||||||
if req.Valid.Gid() {
|
if req.Valid.Gid() {
|
||||||
dir.attributes.Gid = req.Gid
|
dir.entry.Attributes.Gid = req.Gid
|
||||||
}
|
}
|
||||||
|
|
||||||
if req.Valid.Mtime() {
|
if req.Valid.Mtime() {
|
||||||
dir.attributes.Mtime = req.Mtime.Unix()
|
dir.entry.Attributes.Mtime = req.Mtime.Unix()
|
||||||
}
|
}
|
||||||
|
|
||||||
|
dir.wfs.cacheDelete(filer2.FullPath(dir.Path))
|
||||||
|
|
||||||
|
return dir.saveEntry(ctx)
|
||||||
|
|
||||||
|
}
|
||||||
|
|
||||||
|
func (dir *Dir) Setxattr(ctx context.Context, req *fuse.SetxattrRequest) error {
|
||||||
|
|
||||||
|
glog.V(4).Infof("dir Setxattr %s: %s", dir.Path, req.Name)
|
||||||
|
|
||||||
|
if err := dir.maybeLoadEntry(ctx); err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
if err := setxattr(dir.entry, req); err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
dir.wfs.cacheDelete(filer2.FullPath(dir.Path))
|
||||||
|
|
||||||
|
return dir.saveEntry(ctx)
|
||||||
|
|
||||||
|
}
|
||||||
|
|
||||||
|
func (dir *Dir) Removexattr(ctx context.Context, req *fuse.RemovexattrRequest) error {
|
||||||
|
|
||||||
|
glog.V(4).Infof("dir Removexattr %s: %s", dir.Path, req.Name)
|
||||||
|
|
||||||
|
if err := dir.maybeLoadEntry(ctx); err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
if err := removexattr(dir.entry, req); err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
dir.wfs.cacheDelete(filer2.FullPath(dir.Path))
|
||||||
|
|
||||||
|
return dir.saveEntry(ctx)
|
||||||
|
|
||||||
|
}
|
||||||
|
|
||||||
|
func (dir *Dir) Listxattr(ctx context.Context, req *fuse.ListxattrRequest, resp *fuse.ListxattrResponse) error {
|
||||||
|
|
||||||
|
glog.V(4).Infof("dir Listxattr %s", dir.Path)
|
||||||
|
|
||||||
|
if err := dir.maybeLoadEntry(ctx); err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
if err := listxattr(dir.entry, req, resp); err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
return nil
|
||||||
|
|
||||||
|
}
|
||||||
|
|
||||||
|
func (dir *Dir) Forget() {
|
||||||
|
glog.V(3).Infof("Forget dir %s", dir.Path)
|
||||||
|
|
||||||
|
dir.wfs.forgetNode(filer2.FullPath(dir.Path))
|
||||||
|
}
|
||||||
|
|
||||||
|
func (dir *Dir) maybeLoadEntry(ctx context.Context) error {
|
||||||
|
if dir.entry == nil {
|
||||||
|
parentDirPath, name := filer2.FullPath(dir.Path).DirAndName()
|
||||||
|
entry, err := dir.wfs.maybeLoadEntry(ctx, parentDirPath, name)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
dir.entry = entry
|
||||||
|
}
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func (dir *Dir) saveEntry(ctx context.Context) error {
|
||||||
|
|
||||||
parentDir, name := filer2.FullPath(dir.Path).DirAndName()
|
parentDir, name := filer2.FullPath(dir.Path).DirAndName()
|
||||||
return dir.wfs.WithFilerClient(ctx, func(client filer_pb.SeaweedFilerClient) error {
|
|
||||||
|
return dir.wfs.WithFilerClient(ctx, func(ctx context.Context, client filer_pb.SeaweedFilerClient) error {
|
||||||
|
|
||||||
request := &filer_pb.UpdateEntryRequest{
|
request := &filer_pb.UpdateEntryRequest{
|
||||||
Directory: parentDir,
|
Directory: parentDir,
|
||||||
Entry: &filer_pb.Entry{
|
Entry: dir.entry,
|
||||||
Name: name,
|
|
||||||
Attributes: dir.attributes,
|
|
||||||
},
|
|
||||||
}
|
}
|
||||||
|
|
||||||
glog.V(1).Infof("set attr directory entry: %v", request)
|
glog.V(1).Infof("save dir entry: %v", request)
|
||||||
_, err := client.UpdateEntry(ctx, request)
|
_, err := client.UpdateEntry(ctx, request)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
glog.V(0).Infof("UpdateEntry %s: %v", dir.Path, err)
|
glog.V(0).Infof("UpdateEntry dir %s/%s: %v", parentDir, name, err)
|
||||||
return fuse.EIO
|
return fuse.EIO
|
||||||
}
|
}
|
||||||
|
|
||||||
dir.wfs.listDirectoryEntriesCache.Delete(dir.Path)
|
|
||||||
|
|
||||||
return nil
|
return nil
|
||||||
})
|
})
|
||||||
|
|
||||||
}
|
|
||||||
|
|
||||||
func estimatedCacheTtl(numEntries int) time.Duration {
|
|
||||||
if numEntries < 100 {
|
|
||||||
// 30 ms per entry
|
|
||||||
return 3 * time.Second
|
|
||||||
}
|
|
||||||
if numEntries < 1000 {
|
|
||||||
// 10 ms per entry
|
|
||||||
return 10 * time.Second
|
|
||||||
}
|
|
||||||
if numEntries < 10000 {
|
|
||||||
// 10 ms per entry
|
|
||||||
return 100 * time.Second
|
|
||||||
}
|
|
||||||
|
|
||||||
// 2 ms per entry
|
|
||||||
return time.Duration(numEntries*2) * time.Millisecond
|
|
||||||
}
|
}
|
||||||
|
@ -35,8 +35,8 @@ func (dir *Dir) Symlink(ctx context.Context, req *fuse.SymlinkRequest) (fs.Node,
|
|||||||
},
|
},
|
||||||
}
|
}
|
||||||
|
|
||||||
err := dir.wfs.WithFilerClient(ctx, func(client filer_pb.SeaweedFilerClient) error {
|
err := dir.wfs.WithFilerClient(ctx, func(ctx context.Context, client filer_pb.SeaweedFilerClient) error {
|
||||||
if _, err := client.CreateEntry(ctx, request); err != nil {
|
if err := filer_pb.CreateEntry(ctx, client, request); err != nil {
|
||||||
glog.V(0).Infof("symlink %s/%s: %v", dir.Path, req.NewName, err)
|
glog.V(0).Infof("symlink %s/%s: %v", dir.Path, req.NewName, err)
|
||||||
return fuse.EIO
|
return fuse.EIO
|
||||||
}
|
}
|
||||||
@ -51,7 +51,7 @@ func (dir *Dir) Symlink(ctx context.Context, req *fuse.SymlinkRequest) (fs.Node,
|
|||||||
|
|
||||||
func (file *File) Readlink(ctx context.Context, req *fuse.ReadlinkRequest) (string, error) {
|
func (file *File) Readlink(ctx context.Context, req *fuse.ReadlinkRequest) (string, error) {
|
||||||
|
|
||||||
if err := file.maybeLoadAttributes(ctx); err != nil {
|
if err := file.maybeLoadEntry(ctx); err != nil {
|
||||||
return "", err
|
return "", err
|
||||||
}
|
}
|
||||||
|
|
||||||
|
@ -2,7 +2,9 @@ package filesys
|
|||||||
|
|
||||||
import (
|
import (
|
||||||
"context"
|
"context"
|
||||||
"fmt"
|
|
||||||
|
"github.com/chrislusf/seaweedfs/weed/filer2"
|
||||||
|
"github.com/chrislusf/seaweedfs/weed/glog"
|
||||||
"github.com/chrislusf/seaweedfs/weed/pb/filer_pb"
|
"github.com/chrislusf/seaweedfs/weed/pb/filer_pb"
|
||||||
"github.com/seaweedfs/fuse"
|
"github.com/seaweedfs/fuse"
|
||||||
"github.com/seaweedfs/fuse/fs"
|
"github.com/seaweedfs/fuse/fs"
|
||||||
@ -11,8 +13,9 @@ import (
|
|||||||
func (dir *Dir) Rename(ctx context.Context, req *fuse.RenameRequest, newDirectory fs.Node) error {
|
func (dir *Dir) Rename(ctx context.Context, req *fuse.RenameRequest, newDirectory fs.Node) error {
|
||||||
|
|
||||||
newDir := newDirectory.(*Dir)
|
newDir := newDirectory.(*Dir)
|
||||||
|
glog.V(4).Infof("dir Rename %s/%s => %s/%s", dir.Path, req.OldName, newDir.Path, req.NewName)
|
||||||
|
|
||||||
return dir.wfs.WithFilerClient(ctx, func(client filer_pb.SeaweedFilerClient) error {
|
err := dir.wfs.WithFilerClient(ctx, func(ctx context.Context, client filer_pb.SeaweedFilerClient) error {
|
||||||
|
|
||||||
request := &filer_pb.AtomicRenameEntryRequest{
|
request := &filer_pb.AtomicRenameEntryRequest{
|
||||||
OldDirectory: dir.Path,
|
OldDirectory: dir.Path,
|
||||||
@ -23,11 +26,38 @@ func (dir *Dir) Rename(ctx context.Context, req *fuse.RenameRequest, newDirector
|
|||||||
|
|
||||||
_, err := client.AtomicRenameEntry(ctx, request)
|
_, err := client.AtomicRenameEntry(ctx, request)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return fmt.Errorf("renaming %s/%s => %s/%s: %v", dir.Path, req.OldName, newDir.Path, req.NewName, err)
|
glog.V(0).Infof("dir Rename %s/%s => %s/%s : %v", dir.Path, req.OldName, newDir.Path, req.NewName, err)
|
||||||
|
return fuse.EIO
|
||||||
}
|
}
|
||||||
|
|
||||||
return nil
|
return nil
|
||||||
|
|
||||||
})
|
})
|
||||||
|
|
||||||
|
if err == nil {
|
||||||
|
newPath := filer2.NewFullPath(newDir.Path, req.NewName)
|
||||||
|
oldPath := filer2.NewFullPath(dir.Path, req.OldName)
|
||||||
|
dir.wfs.cacheDelete(newPath)
|
||||||
|
dir.wfs.cacheDelete(oldPath)
|
||||||
|
|
||||||
|
oldFileNode := dir.wfs.getNode(oldPath, func() fs.Node {
|
||||||
|
return nil
|
||||||
|
})
|
||||||
|
newDirNode := dir.wfs.getNode(filer2.FullPath(dir.Path), func() fs.Node {
|
||||||
|
return nil
|
||||||
|
})
|
||||||
|
dir.wfs.forgetNode(newPath)
|
||||||
|
dir.wfs.forgetNode(oldPath)
|
||||||
|
if oldFileNode != nil && newDirNode != nil {
|
||||||
|
oldFile := oldFileNode.(*File)
|
||||||
|
oldFile.Name = req.NewName
|
||||||
|
oldFile.dir = newDirNode.(*Dir)
|
||||||
|
dir.wfs.getNode(newPath, func() fs.Node {
|
||||||
|
return oldFile
|
||||||
|
})
|
||||||
|
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
return err
|
||||||
}
|
}
|
||||||
|
@ -4,8 +4,8 @@ import (
|
|||||||
"bytes"
|
"bytes"
|
||||||
"context"
|
"context"
|
||||||
"fmt"
|
"fmt"
|
||||||
|
"io"
|
||||||
"sync"
|
"sync"
|
||||||
"sync/atomic"
|
|
||||||
"time"
|
"time"
|
||||||
|
|
||||||
"github.com/chrislusf/seaweedfs/weed/glog"
|
"github.com/chrislusf/seaweedfs/weed/glog"
|
||||||
@ -15,28 +15,19 @@ import (
|
|||||||
)
|
)
|
||||||
|
|
||||||
type ContinuousDirtyPages struct {
|
type ContinuousDirtyPages struct {
|
||||||
hasData bool
|
intervals *ContinuousIntervals
|
||||||
Offset int64
|
f *File
|
||||||
Size int64
|
lock sync.Mutex
|
||||||
Data []byte
|
|
||||||
f *File
|
|
||||||
lock sync.Mutex
|
|
||||||
}
|
}
|
||||||
|
|
||||||
func newDirtyPages(file *File) *ContinuousDirtyPages {
|
func newDirtyPages(file *File) *ContinuousDirtyPages {
|
||||||
return &ContinuousDirtyPages{
|
return &ContinuousDirtyPages{
|
||||||
Data: nil,
|
intervals: &ContinuousIntervals{},
|
||||||
f: file,
|
f: file,
|
||||||
(diff continues in weed/filesys/dirty_pages.go)

  }
 }

 func (pages *ContinuousDirtyPages) releaseResource() {
-  if pages.Data != nil {
-    pages.f.wfs.bufPool.Put(pages.Data)
-    pages.Data = nil
-    atomic.AddInt32(&counter, -1)
-    glog.V(3).Infof("%s/%s releasing resource %d", pages.f.dir.Path, pages.f.Name, counter)
-  }
 }

 var counter = int32(0)

@@ -46,82 +37,44 @@ func (pages *ContinuousDirtyPages) AddPage(ctx context.Context, offset int64, da
   pages.lock.Lock()
   defer pages.lock.Unlock()

-  var chunk *filer_pb.FileChunk
+  glog.V(3).Infof("%s AddPage [%d,%d)", pages.f.fullpath(), offset, offset+int64(len(data)))

   if len(data) > int(pages.f.wfs.option.ChunkSizeLimit) {
     // this is more than what buffer can hold.
     return pages.flushAndSave(ctx, offset, data)
   }

-  if pages.Data == nil {
-    pages.Data = pages.f.wfs.bufPool.Get().([]byte)
-    atomic.AddInt32(&counter, 1)
-    glog.V(3).Infof("%s/%s acquire resource %d", pages.f.dir.Path, pages.f.Name, counter)
-  }
+  pages.intervals.AddInterval(data, offset)

-  if offset < pages.Offset || offset >= pages.Offset+int64(len(pages.Data)) ||
-    pages.Offset+int64(len(pages.Data)) < offset+int64(len(data)) {
-    // if the data is out of range,
-    // or buffer is full if adding new data,
-    // flush current buffer and add new data
-    // println("offset", offset, "size", len(data), "existing offset", pages.Offset, "size", pages.Size)
-    if chunk, err = pages.saveExistingPagesToStorage(ctx); err == nil {
-      if chunk != nil {
-        glog.V(4).Infof("%s/%s add save [%d,%d)", pages.f.dir.Path, pages.f.Name, chunk.Offset, chunk.Offset+int64(chunk.Size))
-        chunks = append(chunks, chunk)
-      }
-    } else {
-      glog.V(0).Infof("%s/%s add save [%d,%d): %v", pages.f.dir.Path, pages.f.Name, chunk.Offset, chunk.Offset+int64(chunk.Size), err)
-      return
-    }
-    pages.Offset = offset
-    copy(pages.Data, data)
-    pages.Size = int64(len(data))
-    return
-  }
-
-  if offset != pages.Offset+pages.Size {
-    // when this happens, debug shows the data overlapping with existing data is empty
-    // the data is not just append
-    if offset == pages.Offset && int(pages.Size) < len(data) {
-      // glog.V(2).Infof("pages[%d,%d) pages.Data len=%v, data len=%d, pages.Size=%d", pages.Offset, pages.Offset+pages.Size, len(pages.Data), len(data), pages.Size)
-      copy(pages.Data[pages.Size:], data[pages.Size:])
-    } else {
-      if pages.Size != 0 {
-        glog.V(1).Infof("%s/%s add page: pages [%d, %d) write [%d, %d)", pages.f.dir.Path, pages.f.Name, pages.Offset, pages.Offset+pages.Size, offset, offset+int64(len(data)))
-      }
-      return pages.flushAndSave(ctx, offset, data)
-    }
-  } else {
-    copy(pages.Data[offset-pages.Offset:], data)
-  }
-
-  pages.Size = max(pages.Size, offset+int64(len(data))-pages.Offset)
+  var chunk *filer_pb.FileChunk
+  var hasSavedData bool
+
+  if pages.intervals.TotalSize() > pages.f.wfs.option.ChunkSizeLimit {
+    chunk, hasSavedData, err = pages.saveExistingLargestPageToStorage(ctx)
+    if hasSavedData {
+      chunks = append(chunks, chunk)
+    }
+  }

   return
 }

 func (pages *ContinuousDirtyPages) flushAndSave(ctx context.Context, offset int64, data []byte) (chunks []*filer_pb.FileChunk, err error) {

   var chunk *filer_pb.FileChunk
+  var newChunks []*filer_pb.FileChunk

   // flush existing
-  if chunk, err = pages.saveExistingPagesToStorage(ctx); err == nil {
-    if chunk != nil {
-      glog.V(4).Infof("%s/%s flush existing [%d,%d) to %s", pages.f.dir.Path, pages.f.Name, chunk.Offset, chunk.Offset+int64(chunk.Size), chunk.FileId)
-      chunks = append(chunks, chunk)
+  if newChunks, err = pages.saveExistingPagesToStorage(ctx); err == nil {
+    if newChunks != nil {
+      chunks = append(chunks, newChunks...)
     }
   } else {
-    glog.V(0).Infof("%s/%s failed to flush1 [%d,%d): %v", pages.f.dir.Path, pages.f.Name, chunk.Offset, chunk.Offset+int64(chunk.Size), err)
     return
   }
-  pages.Size = 0
-  pages.Offset = 0

   // flush the new page
-  if chunk, err = pages.saveToStorage(ctx, data, offset); err == nil {
+  if chunk, err = pages.saveToStorage(ctx, bytes.NewReader(data), offset, int64(len(data))); err == nil {
     if chunk != nil {
       glog.V(4).Infof("%s/%s flush big request [%d,%d) to %s", pages.f.dir.Path, pages.f.Name, chunk.Offset, chunk.Offset+int64(chunk.Size), chunk.FileId)
       chunks = append(chunks, chunk)
@@ -134,40 +87,60 @@ func (pages *ContinuousDirtyPages) flushAndSave(ctx context.Context, offset int6
   return
 }

-func (pages *ContinuousDirtyPages) FlushToStorage(ctx context.Context) (chunk *filer_pb.FileChunk, err error) {
+func (pages *ContinuousDirtyPages) FlushToStorage(ctx context.Context) (chunks []*filer_pb.FileChunk, err error) {

   pages.lock.Lock()
   defer pages.lock.Unlock()

-  if pages.Size == 0 {
-    return nil, nil
-  }
-
-  if chunk, err = pages.saveExistingPagesToStorage(ctx); err == nil {
-    pages.Size = 0
-    pages.Offset = 0
-    if chunk != nil {
-      glog.V(4).Infof("%s/%s flush [%d,%d)", pages.f.dir.Path, pages.f.Name, chunk.Offset, chunk.Offset+int64(chunk.Size))
-    }
-  }
-
-  return
-}
+  return pages.saveExistingPagesToStorage(ctx)
+}
+
+func (pages *ContinuousDirtyPages) saveExistingPagesToStorage(ctx context.Context) (chunks []*filer_pb.FileChunk, err error) {
+
+  var hasSavedData bool
+  var chunk *filer_pb.FileChunk
+
+  for {
+
+    chunk, hasSavedData, err = pages.saveExistingLargestPageToStorage(ctx)
+    if !hasSavedData {
+      return chunks, err
+    }
+
+    if err == nil {
+      chunks = append(chunks, chunk)
+    } else {
+      return
+    }
+  }
+
+}
+
+func (pages *ContinuousDirtyPages) saveExistingLargestPageToStorage(ctx context.Context) (chunk *filer_pb.FileChunk, hasSavedData bool, err error) {
+
+  maxList := pages.intervals.RemoveLargestIntervalLinkedList()
+  if maxList == nil {
+    return nil, false, nil
+  }
+
+  chunk, err = pages.saveToStorage(ctx, maxList.ToReader(), maxList.Offset(), maxList.Size())
+  if err == nil {
+    hasSavedData = true
+    glog.V(3).Infof("%s saveToStorage [%d,%d) %s", pages.f.fullpath(), maxList.Offset(), maxList.Offset()+maxList.Size(), chunk.FileId)
+  } else {
+    glog.V(0).Infof("%s saveToStorage [%d,%d): %v", pages.f.fullpath(), maxList.Offset(), maxList.Offset()+maxList.Size(), err)
+    return
+  }
+
+  return
+}

-func (pages *ContinuousDirtyPages) saveExistingPagesToStorage(ctx context.Context) (*filer_pb.FileChunk, error) {
-
-  if pages.Size == 0 {
-    return nil, nil
-  }
-
-  return pages.saveToStorage(ctx, pages.Data[:pages.Size], pages.Offset)
-}
-
-func (pages *ContinuousDirtyPages) saveToStorage(ctx context.Context, buf []byte, offset int64) (*filer_pb.FileChunk, error) {
+func (pages *ContinuousDirtyPages) saveToStorage(ctx context.Context, reader io.Reader, offset int64, size int64) (*filer_pb.FileChunk, error) {

   var fileId, host string
   var auth security.EncodedJwt

-  if err := pages.f.wfs.WithFilerClient(ctx, func(client filer_pb.SeaweedFilerClient) error {
+  if err := pages.f.wfs.WithFilerClient(ctx, func(ctx context.Context, client filer_pb.SeaweedFilerClient) error {

     request := &filer_pb.AssignVolumeRequest{
       Count: 1,
@@ -191,8 +164,7 @@ func (pages *ContinuousDirtyPages) saveToStorage(ctx context.Context, buf []byte
   }

   fileUrl := fmt.Sprintf("http://%s/%s", host, fileId)
-  bufReader := bytes.NewReader(buf)
-  uploadResult, err := operation.Upload(fileUrl, pages.f.Name, bufReader, false, "application/octet-stream", nil, auth)
+  uploadResult, err := operation.Upload(fileUrl, pages.f.Name, reader, false, "", nil, auth)
   if err != nil {
     glog.V(0).Infof("upload data %v to %s: %v", pages.f.Name, fileUrl, err)
     return nil, fmt.Errorf("upload data: %v", err)
@@ -205,7 +177,7 @@ func (pages *ContinuousDirtyPages) saveToStorage(ctx context.Context, buf []byte
   return &filer_pb.FileChunk{
     FileId: fileId,
     Offset: offset,
-    Size:   uint64(len(buf)),
+    Size:   uint64(size),
     Mtime:  time.Now().UnixNano(),
     ETag:   uploadResult.ETag,
   }, nil
@@ -218,3 +190,18 @@ func max(x, y int64) int64 {
   }
   return y
 }
+
+func min(x, y int64) int64 {
+  if x < y {
+    return x
+  }
+  return y
+}
+
+func (pages *ContinuousDirtyPages) ReadDirtyData(ctx context.Context, data []byte, startOffset int64) (offset int64, size int) {
+
+  pages.lock.Lock()
+  defer pages.lock.Unlock()
+
+  return pages.intervals.ReadData(data, startOffset)
+}
weed/filesys/dirty_page_interval.go (new file, 220 lines)
@@ -0,0 +1,220 @@
package filesys

import (
  "bytes"
  "io"
  "math"
)

type IntervalNode struct {
  Data   []byte
  Offset int64
  Size   int64
  Next   *IntervalNode
}

type IntervalLinkedList struct {
  Head *IntervalNode
  Tail *IntervalNode
}

type ContinuousIntervals struct {
  lists []*IntervalLinkedList
}

func (list *IntervalLinkedList) Offset() int64 {
  return list.Head.Offset
}
func (list *IntervalLinkedList) Size() int64 {
  return list.Tail.Offset + list.Tail.Size - list.Head.Offset
}
func (list *IntervalLinkedList) addNodeToTail(node *IntervalNode) {
  // glog.V(4).Infof("add to tail [%d,%d) + [%d,%d) => [%d,%d)", list.Head.Offset, list.Tail.Offset+list.Tail.Size, node.Offset, node.Offset+node.Size, list.Head.Offset, node.Offset+node.Size)
  list.Tail.Next = node
  list.Tail = node
}
func (list *IntervalLinkedList) addNodeToHead(node *IntervalNode) {
  // glog.V(4).Infof("add to head [%d,%d) + [%d,%d) => [%d,%d)", node.Offset, node.Offset+node.Size, list.Head.Offset, list.Tail.Offset+list.Tail.Size, node.Offset, list.Tail.Offset+list.Tail.Size)
  node.Next = list.Head
  list.Head = node
}

func (list *IntervalLinkedList) ReadData(buf []byte, start, stop int64) {
  t := list.Head
  for {

    nodeStart, nodeStop := max(start, t.Offset), min(stop, t.Offset+t.Size)
    if nodeStart < nodeStop {
      // glog.V(0).Infof("copying start=%d stop=%d t=[%d,%d) t.data=%d => bufSize=%d nodeStart=%d, nodeStop=%d", start, stop, t.Offset, t.Offset+t.Size, len(t.Data), len(buf), nodeStart, nodeStop)
      copy(buf[nodeStart-start:], t.Data[nodeStart-t.Offset:nodeStop-t.Offset])
    }

    if t.Next == nil {
      break
    }
    t = t.Next
  }
}

func (c *ContinuousIntervals) TotalSize() (total int64) {
  for _, list := range c.lists {
    total += list.Size()
  }
  return
}

func subList(list *IntervalLinkedList, start, stop int64) *IntervalLinkedList {
  var nodes []*IntervalNode
  for t := list.Head; t != nil; t = t.Next {
    nodeStart, nodeStop := max(start, t.Offset), min(stop, t.Offset+t.Size)
    if nodeStart >= nodeStop {
      // skip non overlapping IntervalNode
      continue
    }
    nodes = append(nodes, &IntervalNode{
      Data:   t.Data[nodeStart-t.Offset : nodeStop-t.Offset],
      Offset: nodeStart,
      Size:   nodeStop - nodeStart,
      Next:   nil,
    })
  }
  for i := 1; i < len(nodes); i++ {
    nodes[i-1].Next = nodes[i]
  }
  return &IntervalLinkedList{
    Head: nodes[0],
    Tail: nodes[len(nodes)-1],
  }
}

func (c *ContinuousIntervals) AddInterval(data []byte, offset int64) {

  interval := &IntervalNode{Data: data, Offset: offset, Size: int64(len(data))}

  var newLists []*IntervalLinkedList
  for _, list := range c.lists {
    // if list is to the left of new interval, add to the new list
    if list.Tail.Offset+list.Tail.Size <= interval.Offset {
      newLists = append(newLists, list)
    }
    // if list is to the right of new interval, add to the new list
    if interval.Offset+interval.Size <= list.Head.Offset {
      newLists = append(newLists, list)
    }
    // if new interval overwrite the right part of the list
    if list.Head.Offset < interval.Offset && interval.Offset < list.Tail.Offset+list.Tail.Size {
      // create a new list of the left part of existing list
      newLists = append(newLists, subList(list, list.Offset(), interval.Offset))
    }
    // if new interval overwrite the left part of the list
    if list.Head.Offset < interval.Offset+interval.Size && interval.Offset+interval.Size < list.Tail.Offset+list.Tail.Size {
      // create a new list of the right part of existing list
      newLists = append(newLists, subList(list, interval.Offset+interval.Size, list.Tail.Offset+list.Tail.Size))
    }
    // skip anything that is fully overwritten by the new interval
  }

  c.lists = newLists
  // add the new interval to the lists, connecting neighbor lists
  var prevList, nextList *IntervalLinkedList

  for _, list := range c.lists {
    if list.Head.Offset == interval.Offset+interval.Size {
      nextList = list
      break
    }
  }

  for _, list := range c.lists {
    if list.Head.Offset+list.Size() == offset {
      list.addNodeToTail(interval)
      prevList = list
      break
    }
  }

  if prevList != nil && nextList != nil {
    // glog.V(4).Infof("connecting [%d,%d) + [%d,%d) => [%d,%d)", prevList.Head.Offset, prevList.Tail.Offset+prevList.Tail.Size, nextList.Head.Offset, nextList.Tail.Offset+nextList.Tail.Size, prevList.Head.Offset, nextList.Tail.Offset+nextList.Tail.Size)
    prevList.Tail.Next = nextList.Head
    prevList.Tail = nextList.Tail
    c.removeList(nextList)
  } else if nextList != nil {
    // add to head was not done when checking
    nextList.addNodeToHead(interval)
  }
  if prevList == nil && nextList == nil {
    c.lists = append(c.lists, &IntervalLinkedList{
      Head: interval,
      Tail: interval,
    })
  }

  return
}

func (c *ContinuousIntervals) RemoveLargestIntervalLinkedList() *IntervalLinkedList {
  var maxSize int64
  maxIndex := -1
  for k, list := range c.lists {
    if maxSize <= list.Size() {
      maxSize = list.Size()
      maxIndex = k
    }
  }
  if maxSize <= 0 {
    return nil
  }

  t := c.lists[maxIndex]
  c.lists = append(c.lists[0:maxIndex], c.lists[maxIndex+1:]...)
  return t

}

func (c *ContinuousIntervals) removeList(target *IntervalLinkedList) {
  index := -1
  for k, list := range c.lists {
    if list.Offset() == target.Offset() {
      index = k
    }
  }
  if index < 0 {
    return
  }

  c.lists = append(c.lists[0:index], c.lists[index+1:]...)

}

func (c *ContinuousIntervals) ReadData(data []byte, startOffset int64) (offset int64, size int) {
  var minOffset int64 = math.MaxInt64
  var maxStop int64
  for _, list := range c.lists {
    start := max(startOffset, list.Offset())
    stop := min(startOffset+int64(len(data)), list.Offset()+list.Size())
    if start <= stop {
      list.ReadData(data[start-startOffset:], start, stop)
      minOffset = min(minOffset, start)
      maxStop = max(maxStop, stop)
    }
  }

  if minOffset == math.MaxInt64 {
    return 0, 0
  }

  offset = minOffset
  size = int(maxStop - offset)
  return
}

func (l *IntervalLinkedList) ToReader() io.Reader {
  var readers []io.Reader
  t := l.Head
  readers = append(readers, bytes.NewReader(t.Data))
  for t.Next != nil {
    t = t.Next
    readers = append(readers, bytes.NewReader(t.Data))
  }
  return io.MultiReader(readers...)
}
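The new ContinuousIntervals type is the core of this change: overlapping writes are split and merged so only the newest bytes for any offset survive, and the largest contiguous run can be popped off for flushing. A minimal usage sketch, assuming it sits next to the new dirty_page_interval.go in package filesys; the function name and the concrete values are illustrative, not part of the commit:

package filesys

import "fmt"

// sketchContinuousIntervals walks two overlapping writes through the dirty-page
// buffer, reads the merged data back, and pops the largest run for flushing.
func sketchContinuousIntervals() {
  c := &ContinuousIntervals{}

  // write [0,4), then overwrite [2,6); the runs stay contiguous
  c.AddInterval([]byte{1, 1, 1, 1}, 0)
  c.AddInterval([]byte{2, 2, 2, 2}, 2)

  buf := make([]byte, 8)
  offset, size := c.ReadData(buf, 0)
  fmt.Println(offset, size, buf[:size]) // 0 6 [1 1 2 2 2 2]

  // remove the largest contiguous interval, e.g. to upload it as one chunk
  list := c.RemoveLargestIntervalLinkedList()
  fmt.Println(list.Offset(), list.Size()) // 0 6
}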
weed/filesys/dirty_page_interval_test.go (new file, 72 lines)
@@ -0,0 +1,72 @@
package filesys

import (
  "bytes"
  "testing"
)

func TestContinuousIntervals_AddIntervalAppend(t *testing.T) {

  c := &ContinuousIntervals{}

  // 25, 25, 25
  c.AddInterval(getBytes(25, 3), 0)
  // _, _, 23, 23, 23, 23
  c.AddInterval(getBytes(23, 4), 2)

  expectedData(t, c, 0, 25, 25, 23, 23, 23, 23)

}

func TestContinuousIntervals_AddIntervalInnerOverwrite(t *testing.T) {

  c := &ContinuousIntervals{}

  // 25, 25, 25, 25, 25
  c.AddInterval(getBytes(25, 5), 0)
  // _, _, 23, 23
  c.AddInterval(getBytes(23, 2), 2)

  expectedData(t, c, 0, 25, 25, 23, 23, 25)

}

func TestContinuousIntervals_AddIntervalFullOverwrite(t *testing.T) {

  c := &ContinuousIntervals{}

  // 25,
  c.AddInterval(getBytes(25, 1), 0)
  // _, _, _, _, 23, 23
  c.AddInterval(getBytes(23, 2), 4)
  // _, _, _, 24, 24, 24, 24
  c.AddInterval(getBytes(24, 4), 3)

  // _, 22, 22
  c.AddInterval(getBytes(22, 2), 1)

  expectedData(t, c, 0, 25, 22, 22, 24, 24, 24, 24)

}

func expectedData(t *testing.T, c *ContinuousIntervals, offset int, data ...byte) {
  start, stop := int64(offset), int64(offset+len(data))
  for _, list := range c.lists {
    nodeStart, nodeStop := max(start, list.Head.Offset), min(stop, list.Head.Offset+list.Size())
    if nodeStart < nodeStop {
      buf := make([]byte, nodeStop-nodeStart)
      list.ReadData(buf, nodeStart, nodeStop)
      if bytes.Compare(buf, data[nodeStart-start:nodeStop-start]) != 0 {
        t.Errorf("expected %v actual %v", data[nodeStart-start:nodeStop-start], buf)
      }
    }
  }
}

func getBytes(content byte, length int) []byte {
  data := make([]byte, length)
  for i := 0; i < length; i++ {
    data[i] = content
  }
  return data
}
weed/filesys/file.go

@@ -3,7 +3,6 @@ package filesys
 import (
   "context"
   "os"
-  "path/filepath"
   "sort"
   "time"

@@ -20,6 +19,11 @@ var _ = fs.Node(&File{})
 var _ = fs.NodeOpener(&File{})
 var _ = fs.NodeFsyncer(&File{})
 var _ = fs.NodeSetattrer(&File{})
+var _ = fs.NodeGetxattrer(&File{})
+var _ = fs.NodeSetxattrer(&File{})
+var _ = fs.NodeRemovexattrer(&File{})
+var _ = fs.NodeListxattrer(&File{})
+var _ = fs.NodeForgetter(&File{})

 type File struct {
   Name string
@@ -27,21 +31,32 @@ type File struct {
   wfs            *WFS
   entry          *filer_pb.Entry
   entryViewCache []filer2.VisibleInterval
-  isOpen         bool
+  isOpen         int
 }

-func (file *File) fullpath() string {
-  return filepath.Join(file.dir.Path, file.Name)
+func (file *File) fullpath() filer2.FullPath {
+  return filer2.NewFullPath(file.dir.Path, file.Name)
 }

 func (file *File) Attr(ctx context.Context, attr *fuse.Attr) error {

-  if err := file.maybeLoadAttributes(ctx); err != nil {
-    return err
+  glog.V(4).Infof("file Attr %s, open:%v, existing attr: %+v", file.fullpath(), file.isOpen, attr)
+
+  if file.isOpen <= 0 {
+    if err := file.maybeLoadEntry(ctx); err != nil {
+      return err
+    }
   }

+  attr.Inode = file.fullpath().AsInode()
+  attr.Valid = time.Second
   attr.Mode = os.FileMode(file.entry.Attributes.FileMode)
   attr.Size = filer2.TotalSize(file.entry.Chunks)
+  if file.isOpen > 0 {
+    attr.Size = file.entry.Attributes.FileSize
+    glog.V(4).Infof("file Attr %s, open:%v, size: %d", file.fullpath(), file.isOpen, attr.Size)
+  }
+  attr.Crtime = time.Unix(file.entry.Attributes.Crtime, 0)
   attr.Mtime = time.Unix(file.entry.Attributes.Mtime, 0)
   attr.Gid = file.entry.Attributes.Gid
   attr.Uid = file.entry.Attributes.Uid
@@ -52,11 +67,22 @@ func (file *File) Attr(ctx context.Context, attr *fuse.Attr) error {

 }

+func (file *File) Getxattr(ctx context.Context, req *fuse.GetxattrRequest, resp *fuse.GetxattrResponse) error {
+
+  // glog.V(4).Infof("file Getxattr %s", file.fullpath())
+
+  if err := file.maybeLoadEntry(ctx); err != nil {
+    return err
+  }
+
+  return getxattr(file.entry, req, resp)
+}
+
 func (file *File) Open(ctx context.Context, req *fuse.OpenRequest, resp *fuse.OpenResponse) (fs.Handle, error) {

-  glog.V(3).Infof("%v file open %+v", file.fullpath(), req)
+  glog.V(4).Infof("file %v open %+v", file.fullpath(), req)

-  file.isOpen = true
+  file.isOpen++

   handle := file.wfs.AcquireHandle(file, req.Uid, req.Gid)

@@ -70,17 +96,28 @@ func (file *File) Open(ctx context.Context, req *fuse.OpenRequest, resp *fuse.Op

 func (file *File) Setattr(ctx context.Context, req *fuse.SetattrRequest, resp *fuse.SetattrResponse) error {

-  if err := file.maybeLoadAttributes(ctx); err != nil {
+  glog.V(3).Infof("%v file setattr %+v, old:%+v", file.fullpath(), req, file.entry.Attributes)
+
+  if err := file.maybeLoadEntry(ctx); err != nil {
     return err
   }

-  glog.V(3).Infof("%v file setattr %+v, old:%+v", file.fullpath(), req, file.entry.Attributes)
   if req.Valid.Size() {

     glog.V(3).Infof("%v file setattr set size=%v", file.fullpath(), req.Size)
-    if req.Size == 0 {
+    if req.Size < filer2.TotalSize(file.entry.Chunks) {
       // fmt.Printf("truncate %v \n", fullPath)
-      file.entry.Chunks = nil
+      var chunks []*filer_pb.FileChunk
+      for _, chunk := range file.entry.Chunks {
+        int64Size := int64(chunk.Size)
+        if chunk.Offset+int64Size > int64(req.Size) {
+          int64Size = int64(req.Size) - chunk.Offset
+        }
+        if int64Size > 0 {
+          chunks = append(chunks, chunk)
+        }
+      }
+      file.entry.Chunks = chunks
       file.entryViewCache = nil
     }
     file.entry.Attributes.FileSize = req.Size
@@ -105,26 +142,65 @@ func (file *File) Setattr(ctx context.Context, req *fuse.SetattrRequest, resp *f
     file.entry.Attributes.Mtime = req.Mtime.Unix()
   }

-  if file.isOpen {
+  if file.isOpen > 0 {
     return nil
   }

-  return file.wfs.WithFilerClient(ctx, func(client filer_pb.SeaweedFilerClient) error {
-
-    request := &filer_pb.UpdateEntryRequest{
-      Directory: file.dir.Path,
-      Entry:     file.entry,
-    }
-
-    glog.V(1).Infof("set attr file entry: %v", request)
-    _, err := client.UpdateEntry(ctx, request)
-    if err != nil {
-      glog.V(0).Infof("UpdateEntry file %s/%s: %v", file.dir.Path, file.Name, err)
-      return fuse.EIO
-    }
-
-    return nil
-  })
+  file.wfs.cacheDelete(file.fullpath())
+
+  return file.saveEntry(ctx)
+
+}
+
+func (file *File) Setxattr(ctx context.Context, req *fuse.SetxattrRequest) error {
+
+  glog.V(4).Infof("file Setxattr %s: %s", file.fullpath(), req.Name)
+
+  if err := file.maybeLoadEntry(ctx); err != nil {
+    return err
+  }
+
+  if err := setxattr(file.entry, req); err != nil {
+    return err
+  }
+
+  file.wfs.cacheDelete(file.fullpath())
+
+  return file.saveEntry(ctx)
+
+}
+
+func (file *File) Removexattr(ctx context.Context, req *fuse.RemovexattrRequest) error {
+
+  glog.V(4).Infof("file Removexattr %s: %s", file.fullpath(), req.Name)
+
+  if err := file.maybeLoadEntry(ctx); err != nil {
+    return err
+  }
+
+  if err := removexattr(file.entry, req); err != nil {
+    return err
+  }
+
+  file.wfs.cacheDelete(file.fullpath())
+
+  return file.saveEntry(ctx)
+
+}
+
+func (file *File) Listxattr(ctx context.Context, req *fuse.ListxattrRequest, resp *fuse.ListxattrResponse) error {
+
+  glog.V(4).Infof("file Listxattr %s", file.fullpath())
+
+  if err := file.maybeLoadEntry(ctx); err != nil {
+    return err
+  }
+
+  if err := listxattr(file.entry, req, resp); err != nil {
+    return err
+  }
+
+  return nil
+
 }

@@ -136,50 +212,26 @@ func (file *File) Fsync(ctx context.Context, req *fuse.FsyncRequest) error {
   return nil
 }

-func (file *File) maybeLoadAttributes(ctx context.Context) error {
-  if file.entry == nil || !file.isOpen {
-    item := file.wfs.listDirectoryEntriesCache.Get(file.fullpath())
-    if item != nil && !item.Expired() {
-      entry := item.Value().(*filer_pb.Entry)
-      file.setEntry(entry)
-      // glog.V(1).Infof("file attr read cached %v attributes", file.Name)
-    } else {
-      err := file.wfs.WithFilerClient(ctx, func(client filer_pb.SeaweedFilerClient) error {
-
-        request := &filer_pb.LookupDirectoryEntryRequest{
-          Name:      file.Name,
-          Directory: file.dir.Path,
-        }
-
-        resp, err := client.LookupDirectoryEntry(ctx, request)
-        if err != nil {
-          glog.V(3).Infof("file attr read file %v: %v", request, err)
-          return fuse.ENOENT
-        }
-
-        file.setEntry(resp.Entry)
-
-        glog.V(3).Infof("file attr %v %+v: %d", file.fullpath(), file.entry.Attributes, filer2.TotalSize(file.entry.Chunks))
-
-        // file.wfs.listDirectoryEntriesCache.Set(file.fullpath(), file.entry, file.wfs.option.EntryCacheTtl)
-
-        return nil
-      })
-
-      if err != nil {
-        return err
-      }
+func (file *File) Forget() {
+  glog.V(3).Infof("Forget file %s/%s", file.dir.Path, file.Name)
+
+  file.wfs.forgetNode(filer2.NewFullPath(file.dir.Path, file.Name))
+}
+
+func (file *File) maybeLoadEntry(ctx context.Context) error {
+  if file.entry == nil || file.isOpen <= 0 {
+    entry, err := file.wfs.maybeLoadEntry(ctx, file.dir.Path, file.Name)
+    if err != nil {
+      return err
+    }
+    if entry != nil {
+      file.setEntry(entry)
     }
   }
   return nil
 }

-func (file *File) addChunk(chunk *filer_pb.FileChunk) {
-  if chunk != nil {
-    file.addChunks([]*filer_pb.FileChunk{chunk})
-  }
-}
-
 func (file *File) addChunks(chunks []*filer_pb.FileChunk) {

   sort.Slice(chunks, func(i, j int) bool {
@@ -203,3 +255,22 @@ func (file *File) setEntry(entry *filer_pb.Entry) {
   file.entry = entry
   file.entryViewCache = filer2.NonOverlappingVisibleIntervals(file.entry.Chunks)
 }
+
+func (file *File) saveEntry(ctx context.Context) error {
+  return file.wfs.WithFilerClient(ctx, func(ctx context.Context, client filer_pb.SeaweedFilerClient) error {
+
+    request := &filer_pb.UpdateEntryRequest{
+      Directory: file.dir.Path,
+      Entry:     file.entry,
+    }
+
+    glog.V(1).Infof("save file entry: %v", request)
+    _, err := client.UpdateEntry(ctx, request)
+    if err != nil {
+      glog.V(0).Infof("UpdateEntry file %s/%s: %v", file.dir.Path, file.Name, err)
+      return fuse.EIO
+    }
+
+    return nil
+  })
+}
weed/filesys/filehandle.go

@@ -7,10 +7,11 @@ import (
   "path"
   "time"

+  "github.com/gabriel-vasile/mimetype"
+
   "github.com/chrislusf/seaweedfs/weed/filer2"
   "github.com/chrislusf/seaweedfs/weed/glog"
   "github.com/chrislusf/seaweedfs/weed/pb/filer_pb"
-  "github.com/gabriel-vasile/mimetype"
   "github.com/seaweedfs/fuse"
   "github.com/seaweedfs/fuse/fs"
 )
@@ -50,60 +51,84 @@ func (fh *FileHandle) Read(ctx context.Context, req *fuse.ReadRequest, resp *fus

   glog.V(4).Infof("%s read fh %d: [%d,%d)", fh.f.fullpath(), fh.handle, req.Offset, req.Offset+int64(req.Size))

-  // this value should come from the filer instead of the old f
-  if len(fh.f.entry.Chunks) == 0 {
-    glog.V(1).Infof("empty fh %v/%v", fh.f.dir.Path, fh.f.Name)
-    return nil
-  }
-
   buff := make([]byte, req.Size)

-  if fh.f.entryViewCache == nil {
-    fh.f.entryViewCache = filer2.NonOverlappingVisibleIntervals(fh.f.entry.Chunks)
+  totalRead, err := fh.readFromChunks(ctx, buff, req.Offset)
+  if err == nil {
+    dirtyOffset, dirtySize := fh.readFromDirtyPages(ctx, buff, req.Offset)
+    if totalRead+req.Offset < dirtyOffset+int64(dirtySize) {
+      totalRead = dirtyOffset + int64(dirtySize) - req.Offset
+    }
   }

-  chunkViews := filer2.ViewFromVisibleIntervals(fh.f.entryViewCache, req.Offset, req.Size)
-
-  totalRead, err := filer2.ReadIntoBuffer(ctx, fh.f.wfs, fh.f.fullpath(), buff, chunkViews, req.Offset)
-
   resp.Data = buff[:totalRead]

   if err != nil {
     glog.Errorf("file handle read %s: %v", fh.f.fullpath(), err)
+    return fuse.EIO
   }

   return err
 }

+func (fh *FileHandle) readFromDirtyPages(ctx context.Context, buff []byte, startOffset int64) (offset int64, size int) {
+  return fh.dirtyPages.ReadDirtyData(ctx, buff, startOffset)
+}
+
+func (fh *FileHandle) readFromChunks(ctx context.Context, buff []byte, offset int64) (int64, error) {
+
+  // this value should come from the filer instead of the old f
+  if len(fh.f.entry.Chunks) == 0 {
+    glog.V(1).Infof("empty fh %v", fh.f.fullpath())
+    return 0, nil
+  }
+
+  if fh.f.entryViewCache == nil {
+    fh.f.entryViewCache = filer2.NonOverlappingVisibleIntervals(fh.f.entry.Chunks)
+  }
+
+  chunkViews := filer2.ViewFromVisibleIntervals(fh.f.entryViewCache, offset, len(buff))
+
+  totalRead, err := filer2.ReadIntoBuffer(ctx, fh.f.wfs, fh.f.fullpath(), buff, chunkViews, offset)
+
+  if err != nil {
+    glog.Errorf("file handle read %s: %v", fh.f.fullpath(), err)
+  }
+
+  return totalRead, err
+}
+
 // Write to the file handle
 func (fh *FileHandle) Write(ctx context.Context, req *fuse.WriteRequest, resp *fuse.WriteResponse) error {

   // write the request to volume servers

-  glog.V(4).Infof("%+v/%v write fh %d: [%d,%d)", fh.f.dir.Path, fh.f.Name, fh.handle, req.Offset, req.Offset+int64(len(req.Data)))
+  fh.f.entry.Attributes.FileSize = uint64(max(req.Offset+int64(len(req.Data)), int64(fh.f.entry.Attributes.FileSize)))
+  // glog.V(0).Infof("%v write [%d,%d)", fh.f.fullpath(), req.Offset, req.Offset+int64(len(req.Data)))

   chunks, err := fh.dirtyPages.AddPage(ctx, req.Offset, req.Data)
   if err != nil {
-    glog.Errorf("%+v/%v write fh %d: [%d,%d): %v", fh.f.dir.Path, fh.f.Name, fh.handle, req.Offset, req.Offset+int64(len(req.Data)), err)
-    return fmt.Errorf("write %s/%s at [%d,%d): %v", fh.f.dir.Path, fh.f.Name, req.Offset, req.Offset+int64(len(req.Data)), err)
+    glog.Errorf("%v write fh %d: [%d,%d): %v", fh.f.fullpath(), fh.handle, req.Offset, req.Offset+int64(len(req.Data)), err)
+    return fuse.EIO
   }

   resp.Size = len(req.Data)

   if req.Offset == 0 {
     // detect mime type
-    var possibleExt string
-    fh.contentType, possibleExt = mimetype.Detect(req.Data)
-    if ext := path.Ext(fh.f.Name); ext != possibleExt {
+    detectedMIME := mimetype.Detect(req.Data)
+    fh.contentType = detectedMIME.String()
+    if ext := path.Ext(fh.f.Name); ext != detectedMIME.Extension() {
       fh.contentType = mime.TypeByExtension(ext)
     }

     fh.dirtyMetadata = true
   }

-  fh.f.addChunks(chunks)
-
   if len(chunks) > 0 {
+
+    fh.f.addChunks(chunks)
+
     fh.dirtyMetadata = true
   }

@@ -114,11 +139,12 @@ func (fh *FileHandle) Release(ctx context.Context, req *fuse.ReleaseRequest) err

   glog.V(4).Infof("%v release fh %d", fh.f.fullpath(), fh.handle)

-  fh.dirtyPages.releaseResource()
+  fh.f.isOpen--

-  fh.f.wfs.ReleaseHandle(fh.f.fullpath(), fuse.HandleID(fh.handle))
-  fh.f.isOpen = false
+  if fh.f.isOpen <= 0 {
+    fh.dirtyPages.releaseResource()
+    fh.f.wfs.ReleaseHandle(fh.f.fullpath(), fuse.HandleID(fh.handle))
+  }

   return nil
 }
@@ -128,19 +154,22 @@ func (fh *FileHandle) Flush(ctx context.Context, req *fuse.FlushRequest) error {
   // send the data to the OS
   glog.V(4).Infof("%s fh %d flush %v", fh.f.fullpath(), fh.handle, req)

-  chunk, err := fh.dirtyPages.FlushToStorage(ctx)
+  chunks, err := fh.dirtyPages.FlushToStorage(ctx)
   if err != nil {
-    glog.Errorf("flush %s/%s: %v", fh.f.dir.Path, fh.f.Name, err)
-    return fmt.Errorf("flush %s/%s: %v", fh.f.dir.Path, fh.f.Name, err)
+    glog.Errorf("flush %s: %v", fh.f.fullpath(), err)
+    return fuse.EIO
   }

-  fh.f.addChunk(chunk)
+  fh.f.addChunks(chunks)
+  if len(chunks) > 0 {
+    fh.dirtyMetadata = true
+  }

   if !fh.dirtyMetadata {
     return nil
   }

-  return fh.f.wfs.WithFilerClient(ctx, func(client filer_pb.SeaweedFilerClient) error {
+  err = fh.f.wfs.WithFilerClient(ctx, func(ctx context.Context, client filer_pb.SeaweedFilerClient) error {

     if fh.f.entry.Attributes != nil {
       fh.f.entry.Attributes.Mime = fh.contentType
@@ -156,25 +185,36 @@ func (fh *FileHandle) Flush(ctx context.Context, req *fuse.FlushRequest) error {
       Entry: fh.f.entry,
     }

-    glog.V(3).Infof("%s/%s set chunks: %v", fh.f.dir.Path, fh.f.Name, len(fh.f.entry.Chunks))
+    glog.V(3).Infof("%s set chunks: %v", fh.f.fullpath(), len(fh.f.entry.Chunks))
     for i, chunk := range fh.f.entry.Chunks {
-      glog.V(3).Infof("%s/%s chunks %d: %v [%d,%d)", fh.f.dir.Path, fh.f.Name, i, chunk.FileId, chunk.Offset, chunk.Offset+int64(chunk.Size))
+      glog.V(3).Infof("%s chunks %d: %v [%d,%d)", fh.f.fullpath(), i, chunk.FileId, chunk.Offset, chunk.Offset+int64(chunk.Size))
     }

     chunks, garbages := filer2.CompactFileChunks(fh.f.entry.Chunks)
     fh.f.entry.Chunks = chunks
     // fh.f.entryViewCache = nil

-    if _, err := client.CreateEntry(ctx, request); err != nil {
+    if err := filer_pb.CreateEntry(ctx, client, request); err != nil {
       glog.Errorf("update fh: %v", err)
       return fmt.Errorf("update fh: %v", err)
     }

     fh.f.wfs.deleteFileChunks(ctx, garbages)
     for i, chunk := range garbages {
-      glog.V(3).Infof("garbage %s/%s chunks %d: %v [%d,%d)", fh.f.dir.Path, fh.f.Name, i, chunk.FileId, chunk.Offset, chunk.Offset+int64(chunk.Size))
+      glog.V(3).Infof("garbage %s chunks %d: %v [%d,%d)", fh.f.fullpath(), i, chunk.FileId, chunk.Offset, chunk.Offset+int64(chunk.Size))
     }

     return nil
   })

+  if err == nil {
+    fh.dirtyMetadata = false
+  }
+
+  if err != nil {
+    glog.Errorf("%v fh %d flush: %v", fh.f.fullpath(), fh.handle, err)
+    return fuse.EIO
+  }
+
+  return nil
 }
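The reworked Read path above first fills the buffer from committed chunks and then overlays any not-yet-flushed dirty pages on top of it, growing the reported read extent if the dirty data reaches past the chunk data. A small sketch of just that extent arithmetic, assuming it compiles alongside the files above in package filesys (the function name is illustrative, not part of the commit):

package filesys

// mergedReadSize mirrors the arithmetic in FileHandle.Read: the visible read
// extent is whichever reaches further, the chunk read or the dirty-page overlay.
func mergedReadSize(reqOffset, chunkRead int64, dirtyOffset int64, dirtySize int) int64 {
  totalRead := chunkRead
  if reqOffset+totalRead < dirtyOffset+int64(dirtySize) {
    totalRead = dirtyOffset + int64(dirtySize) - reqOffset
  }
  return totalRead
}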
weed/filesys/wfs.go

@@ -5,16 +5,19 @@ import (
   "fmt"
   "math"
   "os"
+  "strings"
   "sync"
   "time"

+  "github.com/karlseguin/ccache"
+  "google.golang.org/grpc"
+
+  "github.com/chrislusf/seaweedfs/weed/filer2"
   "github.com/chrislusf/seaweedfs/weed/glog"
   "github.com/chrislusf/seaweedfs/weed/pb/filer_pb"
   "github.com/chrislusf/seaweedfs/weed/util"
-  "github.com/karlseguin/ccache"
   "github.com/seaweedfs/fuse"
   "github.com/seaweedfs/fuse/fs"
-  "google.golang.org/grpc"
 )

 type Option struct {
@@ -26,7 +29,7 @@ type Option struct {
   TtlSec            int32
   ChunkSizeLimit    int64
   DataCenter        string
-  DirListingLimit   int
+  DirListCacheLimit int64
   EntryCacheTtl     time.Duration
   Umask             os.FileMode

@@ -44,13 +47,19 @@ type WFS struct {
   option                    *Option
   listDirectoryEntriesCache *ccache.Cache

-  // contains all open handles
+  // contains all open handles, protected by handlesLock
+  handlesLock       sync.Mutex
   handles           []*FileHandle
-  pathToHandleIndex map[string]int
-  pathToHandleLock  sync.Mutex
+  pathToHandleIndex map[filer2.FullPath]int

   bufPool sync.Pool

   stats statsCache
+
+  // nodes, protected by nodesLock
+  nodesLock sync.Mutex
+  nodes     map[uint64]fs.Node
+  root      fs.Node
 }
 type statsCache struct {
   filer_pb.StatisticsResponse
@@ -60,36 +69,53 @@ type statsCache struct {
 func NewSeaweedFileSystem(option *Option) *WFS {
   wfs := &WFS{
     option: option,
-    listDirectoryEntriesCache: ccache.New(ccache.Configure().MaxSize(1024 * 8).ItemsToPrune(100)),
-    pathToHandleIndex: make(map[string]int),
+    listDirectoryEntriesCache: ccache.New(ccache.Configure().MaxSize(option.DirListCacheLimit * 3).ItemsToPrune(100)),
+    pathToHandleIndex: make(map[filer2.FullPath]int),
     bufPool: sync.Pool{
       New: func() interface{} {
         return make([]byte, option.ChunkSizeLimit)
       },
     },
+    nodes: make(map[uint64]fs.Node),
   }

+  wfs.root = &Dir{Path: wfs.option.FilerMountRootPath, wfs: wfs}
+
   return wfs
 }

 func (wfs *WFS) Root() (fs.Node, error) {
-  return &Dir{Path: wfs.option.FilerMountRootPath, wfs: wfs}, nil
+  return wfs.root, nil
 }

-func (wfs *WFS) WithFilerClient(ctx context.Context, fn func(filer_pb.SeaweedFilerClient) error) error {
+func (wfs *WFS) WithFilerClient(ctx context.Context, fn func(context.Context, filer_pb.SeaweedFilerClient) error) error {

-  return util.WithCachedGrpcClient(ctx, func(grpcConnection *grpc.ClientConn) error {
+  err := util.WithCachedGrpcClient(ctx, func(ctx2 context.Context, grpcConnection *grpc.ClientConn) error {
     client := filer_pb.NewSeaweedFilerClient(grpcConnection)
-    return fn(client)
+    return fn(ctx2, client)
   }, wfs.option.FilerGrpcAddress, wfs.option.GrpcDialOption)

+  if err == nil {
+    return nil
+  }
+  if strings.Contains(err.Error(), "context canceled") {
+    glog.V(2).Infoln("retry context canceled request...")
+    return util.WithCachedGrpcClient(context.Background(), func(ctx2 context.Context, grpcConnection *grpc.ClientConn) error {
+      client := filer_pb.NewSeaweedFilerClient(grpcConnection)
+      return fn(ctx2, client)
+    }, wfs.option.FilerGrpcAddress, wfs.option.GrpcDialOption)
+  }
+  return err
+
 }

 func (wfs *WFS) AcquireHandle(file *File, uid, gid uint32) (fileHandle *FileHandle) {
-  wfs.pathToHandleLock.Lock()
-  defer wfs.pathToHandleLock.Unlock()

   fullpath := file.fullpath()
+  glog.V(4).Infof("%s AcquireHandle uid=%d gid=%d", fullpath, uid, gid)
+
+  wfs.handlesLock.Lock()
+  defer wfs.handlesLock.Unlock()

   index, found := wfs.pathToHandleIndex[fullpath]
   if found && wfs.handles[index] != nil {
@@ -103,24 +129,24 @@ func (wfs *WFS) AcquireHandle(file *File, uid, gid uint32) (fileHandle *FileHand
     wfs.handles[i] = fileHandle
     fileHandle.handle = uint64(i)
     wfs.pathToHandleIndex[fullpath] = i
-    glog.V(4).Infoln(fullpath, "reuse fileHandle id", fileHandle.handle)
+    glog.V(4).Infof("%s reuse fh %d", fullpath, fileHandle.handle)
     return
     }
   }

   wfs.handles = append(wfs.handles, fileHandle)
   fileHandle.handle = uint64(len(wfs.handles) - 1)
-  glog.V(2).Infoln(fullpath, "new fileHandle id", fileHandle.handle)
   wfs.pathToHandleIndex[fullpath] = int(fileHandle.handle)
+  glog.V(4).Infof("%s new fh %d", fullpath, fileHandle.handle)

   return
 }

-func (wfs *WFS) ReleaseHandle(fullpath string, handleId fuse.HandleID) {
-  wfs.pathToHandleLock.Lock()
-  defer wfs.pathToHandleLock.Unlock()
+func (wfs *WFS) ReleaseHandle(fullpath filer2.FullPath, handleId fuse.HandleID) {
+  wfs.handlesLock.Lock()
+  defer wfs.handlesLock.Unlock()

-  glog.V(4).Infof("%s releasing handle id %d current handles length %d", fullpath, handleId, len(wfs.handles))
+  glog.V(4).Infof("%s ReleaseHandle id %d current handles length %d", fullpath, handleId, len(wfs.handles))
   delete(wfs.pathToHandleIndex, fullpath)
   if int(handleId) < len(wfs.handles) {
     wfs.handles[int(handleId)] = nil
@@ -136,7 +162,7 @@ func (wfs *WFS) Statfs(ctx context.Context, req *fuse.StatfsRequest, resp *fuse.

   if wfs.stats.lastChecked < time.Now().Unix()-20 {

-    err := wfs.WithFilerClient(ctx, func(client filer_pb.SeaweedFilerClient) error {
+    err := wfs.WithFilerClient(ctx, func(ctx context.Context, client filer_pb.SeaweedFilerClient) error {

       request := &filer_pb.StatisticsRequest{
         Collection: wfs.option.Collection,
@@ -190,3 +216,44 @@ func (wfs *WFS) Statfs(ctx context.Context, req *fuse.StatfsRequest, resp *fuse.

   return nil
 }
+
+func (wfs *WFS) cacheGet(path filer2.FullPath) *filer_pb.Entry {
+  item := wfs.listDirectoryEntriesCache.Get(string(path))
+  if item != nil && !item.Expired() {
+    return item.Value().(*filer_pb.Entry)
+  }
+  return nil
+}
+func (wfs *WFS) cacheSet(path filer2.FullPath, entry *filer_pb.Entry, ttl time.Duration) {
+  if entry == nil {
+    wfs.listDirectoryEntriesCache.Delete(string(path))
+  } else {
+    wfs.listDirectoryEntriesCache.Set(string(path), entry, ttl)
+  }
+}
+func (wfs *WFS) cacheDelete(path filer2.FullPath) {
+  wfs.listDirectoryEntriesCache.Delete(string(path))
+}
+
+func (wfs *WFS) getNode(fullpath filer2.FullPath, fn func() fs.Node) fs.Node {
+  wfs.nodesLock.Lock()
+  defer wfs.nodesLock.Unlock()
+
+  node, found := wfs.nodes[fullpath.AsInode()]
+  if found {
+    return node
+  }
+  node = fn()
+  if node != nil {
+    wfs.nodes[fullpath.AsInode()] = node
+  }
+  return node
+}
+
+func (wfs *WFS) forgetNode(fullpath filer2.FullPath) {
+  wfs.nodesLock.Lock()
+  defer wfs.nodesLock.Unlock()
+
+  delete(wfs.nodes, fullpath.AsInode())
+}
(hunks from the chunk deletion helpers)

@@ -20,7 +20,7 @@ func (wfs *WFS) deleteFileChunks(ctx context.Context, chunks []*filer_pb.FileChu
     fileIds = append(fileIds, chunk.GetFileIdString())
   }

-  wfs.WithFilerClient(ctx, func(client filer_pb.SeaweedFilerClient) error {
+  wfs.WithFilerClient(ctx, func(ctx context.Context, client filer_pb.SeaweedFilerClient) error {
     deleteFileIds(ctx, wfs.option.GrpcDialOption, client, fileIds)
     return nil
   })
@@ -50,7 +50,10 @@ func deleteFileIds(ctx context.Context, grpcDialOption grpc.DialOption, client f
       VolumeId:  vid,
       Locations: nil,
     }
-    locations := resp.LocationsMap[vid]
+    locations, found := resp.LocationsMap[vid]
+    if !found {
+      continue
+    }
     for _, loc := range locations.Locations {
       lr.Locations = append(lr.Locations, operation.Location{
         Url: loc.Url,
weed/filesys/xattr.go (new file, 144 lines)
@@ -0,0 +1,144 @@
package filesys

import (
  "context"
  "strings"

  "github.com/chrislusf/seaweedfs/weed/filer2"
  "github.com/chrislusf/seaweedfs/weed/glog"
  "github.com/chrislusf/seaweedfs/weed/pb/filer_pb"
  "github.com/seaweedfs/fuse"
)

func getxattr(entry *filer_pb.Entry, req *fuse.GetxattrRequest, resp *fuse.GetxattrResponse) error {

  if entry == nil {
    return fuse.ErrNoXattr
  }
  if entry.Extended == nil {
    return fuse.ErrNoXattr
  }
  data, found := entry.Extended[req.Name]
  if !found {
    return fuse.ErrNoXattr
  }
  if req.Position < uint32(len(data)) {
    size := req.Size
    if req.Position+size >= uint32(len(data)) {
      size = uint32(len(data)) - req.Position
    }
    if size == 0 {
      resp.Xattr = data[req.Position:]
    } else {
      resp.Xattr = data[req.Position : req.Position+size]
    }
  }

  return nil

}

func setxattr(entry *filer_pb.Entry, req *fuse.SetxattrRequest) error {

  if entry == nil {
    return fuse.EIO
  }

  if entry.Extended == nil {
    entry.Extended = make(map[string][]byte)
  }
  data, _ := entry.Extended[req.Name]

  newData := make([]byte, int(req.Position)+len(req.Xattr))

  copy(newData, data)

  copy(newData[int(req.Position):], req.Xattr)

  entry.Extended[req.Name] = newData

  return nil

}

func removexattr(entry *filer_pb.Entry, req *fuse.RemovexattrRequest) error {

  if entry == nil {
    return fuse.ErrNoXattr
  }

  if entry.Extended == nil {
    return fuse.ErrNoXattr
  }

  _, found := entry.Extended[req.Name]

  if !found {
    return fuse.ErrNoXattr
  }

  delete(entry.Extended, req.Name)

  return nil

}

func listxattr(entry *filer_pb.Entry, req *fuse.ListxattrRequest, resp *fuse.ListxattrResponse) error {

  if entry == nil {
    return fuse.EIO
  }

  for k := range entry.Extended {
    resp.Append(k)
  }

  size := req.Size
  if req.Position+size >= uint32(len(resp.Xattr)) {
    size = uint32(len(resp.Xattr)) - req.Position
  }

  if size == 0 {
    resp.Xattr = resp.Xattr[req.Position:]
  } else {
    resp.Xattr = resp.Xattr[req.Position : req.Position+size]
  }

  return nil

}

func (wfs *WFS) maybeLoadEntry(ctx context.Context, dir, name string) (entry *filer_pb.Entry, err error) {

  fullpath := filer2.NewFullPath(dir, name)
  entry = wfs.cacheGet(fullpath)
  if entry != nil {
    return
  }
  // glog.V(3).Infof("read entry cache miss %s", fullpath)

  err = wfs.WithFilerClient(ctx, func(ctx context.Context, client filer_pb.SeaweedFilerClient) error {

    request := &filer_pb.LookupDirectoryEntryRequest{
      Name:      name,
      Directory: dir,
    }

    resp, err := client.LookupDirectoryEntry(ctx, request)
    if err != nil || resp == nil || resp.Entry == nil {
      if err == filer2.ErrNotFound || strings.Contains(err.Error(), filer2.ErrNotFound.Error()) {
        glog.V(3).Infof("file attr read not found file %v: %v", request, err)
        return fuse.ENOENT
      }
      glog.V(3).Infof("attr read %v: %v", request, err)
      return fuse.EIO
    }

    entry = resp.Entry
    wfs.cacheSet(fullpath, entry, wfs.option.EntryCacheTtl)

    return nil
  })

  return
}
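The xattr helpers above serve FUSE's paged xattr reads: req.Position and req.Size select a window of the stored value, with a Size of 0 meaning "everything from Position on" and over-long sizes clamped to the value's end. A small sketch of that slicing rule, assuming it sits in package filesys next to the file above; the function name is illustrative, not part of the commit:

package filesys

// xattrWindow reproduces the windowing used by getxattr above: position/size
// select a slice of the stored value, size 0 means "to the end", and a size
// that runs past the end is clamped.
func xattrWindow(data []byte, position, size uint32) []byte {
  if position >= uint32(len(data)) {
    return nil
  }
  if position+size >= uint32(len(data)) {
    size = uint32(len(data)) - position
  }
  if size == 0 {
    return data[position:]
  }
  return data[position : position+size]
}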
@@ -27,24 +27,24 @@ func (k *AwsSqsPub) GetName() string {
 	return "aws_sqs"
 }
 
-func (k *AwsSqsPub) Initialize(configuration util.Configuration) (err error) {
-	glog.V(0).Infof("filer.notification.aws_sqs.region: %v", configuration.GetString("region"))
-	glog.V(0).Infof("filer.notification.aws_sqs.sqs_queue_name: %v", configuration.GetString("sqs_queue_name"))
+func (k *AwsSqsPub) Initialize(configuration util.Configuration, prefix string) (err error) {
+	glog.V(0).Infof("filer.notification.aws_sqs.region: %v", configuration.GetString(prefix+"region"))
+	glog.V(0).Infof("filer.notification.aws_sqs.sqs_queue_name: %v", configuration.GetString(prefix+"sqs_queue_name"))
 	return k.initialize(
-		configuration.GetString("aws_access_key_id"),
-		configuration.GetString("aws_secret_access_key"),
-		configuration.GetString("region"),
-		configuration.GetString("sqs_queue_name"),
+		configuration.GetString(prefix+"aws_access_key_id"),
+		configuration.GetString(prefix+"aws_secret_access_key"),
+		configuration.GetString(prefix+"region"),
+		configuration.GetString(prefix+"sqs_queue_name"),
 	)
 }
 
-func (k *AwsSqsPub) initialize(awsAccessKeyId, aswSecretAccessKey, region, queueName string) (err error) {
+func (k *AwsSqsPub) initialize(awsAccessKeyId, awsSecretAccessKey, region, queueName string) (err error) {
 
 	config := &aws.Config{
 		Region: aws.String(region),
 	}
-	if awsAccessKeyId != "" && aswSecretAccessKey != "" {
-		config.Credentials = credentials.NewStaticCredentials(awsAccessKeyId, aswSecretAccessKey, "")
+	if awsAccessKeyId != "" && awsSecretAccessKey != "" {
+		config.Credentials = credentials.NewStaticCredentials(awsAccessKeyId, awsSecretAccessKey, "")
 	}
 
 	sess, err := session.NewSession(config)
@@ -11,7 +11,7 @@ type MessageQueue interface {
 	// GetName gets the name to locate the configuration in filer.toml file
 	GetName() string
 	// Initialize initializes the file store
-	Initialize(configuration util.Configuration) error
+	Initialize(configuration util.Configuration, prefix string) error
 	SendMessage(key string, message proto.Message) error
 }
 
@@ -21,7 +21,7 @@ var (
 	Queue MessageQueue
 )
 
-func LoadConfiguration(config *viper.Viper) {
+func LoadConfiguration(config *viper.Viper, prefix string) {
 
 	if config == nil {
 		return
@@ -30,9 +30,8 @@ func LoadConfiguration(config *viper.Viper) {
 	validateOneEnabledQueue(config)
 
 	for _, queue := range MessageQueues {
-		if config.GetBool(queue.GetName() + ".enabled") {
-			viperSub := config.Sub(queue.GetName())
-			if err := queue.Initialize(viperSub); err != nil {
+		if config.GetBool(prefix + queue.GetName() + ".enabled") {
+			if err := queue.Initialize(config, prefix+queue.GetName()+"."); err != nil {
 				glog.Fatalf("Failed to initialize notification for %s: %+v",
 					queue.GetName(), err)
 			}
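The hunks above switch every notification queue from a viper Sub() view to an explicit key prefix. The following is a minimal sketch, not part of this commit, of a queue implementation written against the new interface; the package name, the "stdout" queue, and the assumption that LoadConfiguration is called with a prefix such as "notification." are all illustrative.

// Hypothetical example package; registers a queue that reads prefixed keys.
package stdout

import (
	"github.com/chrislusf/seaweedfs/weed/glog"
	"github.com/chrislusf/seaweedfs/weed/notification"
	"github.com/chrislusf/seaweedfs/weed/util"
	"github.com/golang/protobuf/proto"
)

func init() {
	// Registering lets LoadConfiguration consider a [notification.stdout] section.
	notification.MessageQueues = append(notification.MessageQueues, &StdoutQueue{})
}

type StdoutQueue struct {
	topic string
}

func (q *StdoutQueue) GetName() string {
	return "stdout"
}

// With this change, LoadConfiguration passes prefix = "notification.stdout.",
// so every key is read from the shared configuration rather than a Sub() view.
func (q *StdoutQueue) Initialize(configuration util.Configuration, prefix string) (err error) {
	q.topic = configuration.GetString(prefix + "topic")
	glog.V(0).Infof("notification.stdout.topic: %v", q.topic)
	return nil
}

func (q *StdoutQueue) SendMessage(key string, message proto.Message) error {
	glog.V(0).Infof("notify %s: %+v", key, message)
	return nil
}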
@@ -18,12 +18,13 @@ import (
 	"context"
 	"fmt"
 
-	"github.com/chrislusf/seaweedfs/weed/glog"
-	"github.com/chrislusf/seaweedfs/weed/notification"
-	"github.com/chrislusf/seaweedfs/weed/util"
 	"github.com/golang/protobuf/proto"
 	"gocloud.dev/pubsub"
 	_ "gocloud.dev/pubsub/awssnssqs"
+
+	"github.com/chrislusf/seaweedfs/weed/glog"
+	"github.com/chrislusf/seaweedfs/weed/notification"
+	"github.com/chrislusf/seaweedfs/weed/util"
 	// _ "gocloud.dev/pubsub/azuresb"
 	_ "gocloud.dev/pubsub/gcppubsub"
 	_ "gocloud.dev/pubsub/natspubsub"
@@ -43,8 +44,8 @@ func (k *GoCDKPubSub) GetName() string {
 	return "gocdk_pub_sub"
 }
 
-func (k *GoCDKPubSub) Initialize(config util.Configuration) error {
-	k.topicURL = config.GetString("topic_url")
+func (k *GoCDKPubSub) Initialize(configuration util.Configuration, prefix string) error {
+	k.topicURL = configuration.GetString(prefix + "topic_url")
 	glog.V(0).Infof("notification.gocdk_pub_sub.topic_url: %v", k.topicURL)
 	topic, err := pubsub.OpenTopic(context.Background(), k.topicURL)
 	if err != nil {
@@ -25,13 +25,13 @@ func (k *GooglePubSub) GetName() string {
 	return "google_pub_sub"
 }
 
-func (k *GooglePubSub) Initialize(configuration util.Configuration) (err error) {
-	glog.V(0).Infof("notification.google_pub_sub.project_id: %v", configuration.GetString("project_id"))
-	glog.V(0).Infof("notification.google_pub_sub.topic: %v", configuration.GetString("topic"))
+func (k *GooglePubSub) Initialize(configuration util.Configuration, prefix string) (err error) {
+	glog.V(0).Infof("notification.google_pub_sub.project_id: %v", configuration.GetString(prefix+"project_id"))
+	glog.V(0).Infof("notification.google_pub_sub.topic: %v", configuration.GetString(prefix+"topic"))
 	return k.initialize(
-		configuration.GetString("google_application_credentials"),
-		configuration.GetString("project_id"),
-		configuration.GetString("topic"),
+		configuration.GetString(prefix+"google_application_credentials"),
+		configuration.GetString(prefix+"project_id"),
+		configuration.GetString(prefix+"topic"),
 	)
 }
 
@@ -21,12 +21,12 @@ func (k *KafkaQueue) GetName() string {
 	return "kafka"
 }
 
-func (k *KafkaQueue) Initialize(configuration util.Configuration) (err error) {
-	glog.V(0).Infof("filer.notification.kafka.hosts: %v\n", configuration.GetStringSlice("hosts"))
-	glog.V(0).Infof("filer.notification.kafka.topic: %v\n", configuration.GetString("topic"))
+func (k *KafkaQueue) Initialize(configuration util.Configuration, prefix string) (err error) {
+	glog.V(0).Infof("filer.notification.kafka.hosts: %v\n", configuration.GetStringSlice(prefix+"hosts"))
+	glog.V(0).Infof("filer.notification.kafka.topic: %v\n", configuration.GetString(prefix+"topic"))
 	return k.initialize(
-		configuration.GetStringSlice("hosts"),
-		configuration.GetString("topic"),
+		configuration.GetStringSlice(prefix+"hosts"),
+		configuration.GetString(prefix+"topic"),
 	)
 }
 
@@ -76,7 +76,7 @@ func (k *KafkaQueue) handleError() {
 	for {
 		err := <-k.producer.Errors()
 		if err != nil {
-			glog.Errorf("producer message error, partition:%d offset:%d key:%v valus:%s error(%v) topic:%s", err.Msg.Partition, err.Msg.Offset, err.Msg.Key, err.Msg.Value, err.Err, k.topic)
+			glog.Errorf("producer message error, partition:%d offset:%d key:%v value:%s error(%v) topic:%s", err.Msg.Partition, err.Msg.Offset, err.Msg.Key, err.Msg.Value, err.Err, k.topic)
 		}
 	}
 }
@@ -18,7 +18,7 @@ func (k *LogQueue) GetName() string {
 	return "log"
 }
 
-func (k *LogQueue) Initialize(configuration util.Configuration) (err error) {
+func (k *LogQueue) Initialize(configuration util.Configuration, prefix string) (err error) {
 	return nil
 }
 
@@ -11,13 +11,14 @@ import (
 )
 
 type VolumeAssignRequest struct {
 	Count       uint64
 	Replication string
 	Collection  string
 	Ttl         string
 	DataCenter  string
 	Rack        string
 	DataNode    string
+	WritableVolumeCount uint32
 }
 
 type AssignResult struct {
@@ -43,16 +44,17 @@ func Assign(server string, grpcDialOption grpc.DialOption, primaryRequest *Volum
 			continue
 		}
 
-		lastError = WithMasterServerClient(server, grpcDialOption, func(masterClient master_pb.SeaweedClient) error {
+		lastError = WithMasterServerClient(server, grpcDialOption, func(ctx context.Context, masterClient master_pb.SeaweedClient) error {
 
 			req := &master_pb.AssignRequest{
 				Count:       primaryRequest.Count,
 				Replication: primaryRequest.Replication,
 				Collection:  primaryRequest.Collection,
 				Ttl:         primaryRequest.Ttl,
 				DataCenter:  primaryRequest.DataCenter,
 				Rack:        primaryRequest.Rack,
 				DataNode:    primaryRequest.DataNode,
+				WritableVolumeCount: primaryRequest.WritableVolumeCount,
 			}
 			resp, grpcErr := masterClient.Assign(context.Background(), req)
 			if grpcErr != nil {
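For context, a hedged usage sketch of the request type extended above. The collection and replication values, the AssignResult field names, and the assumption that Assign can be called with only the primary request are illustrative, not taken from this commit.

// Illustrative only: ask the master for one file id while requesting that
// several writable volumes be kept available for this collection.
assignRequest := &operation.VolumeAssignRequest{
	Count:               1,
	Replication:         "001",     // assumed replication setting
	Collection:          "example", // assumed collection name
	WritableVolumeCount: 3,         // new field introduced by this change
}
assignResult, err := operation.Assign(masterAddress, grpcDialOption, assignRequest)
if err != nil {
	return err
}
glog.V(1).Infof("assigned %s at %s", assignResult.Fid, assignResult.Url)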
@@ -4,12 +4,14 @@ import (
 	"context"
 	"errors"
 	"fmt"
-	"github.com/chrislusf/seaweedfs/weed/glog"
-	"github.com/chrislusf/seaweedfs/weed/pb/volume_server_pb"
-	"google.golang.org/grpc"
 	"net/http"
 	"strings"
 	"sync"
+
+	"google.golang.org/grpc"
+
+	"github.com/chrislusf/seaweedfs/weed/glog"
+	"github.com/chrislusf/seaweedfs/weed/pb/volume_server_pb"
 )
 
 type DeleteResult struct {
@@ -94,7 +96,7 @@ func DeleteFilesWithLookupVolumeId(grpcDialOption grpc.DialOption, fileIds []str
 
 			if deleteResults, deleteErr := DeleteFilesAtOneVolumeServer(server, grpcDialOption, fidList); deleteErr != nil {
 				err = deleteErr
-			} else {
+			} else if deleteResults != nil {
 				resultChan <- deleteResults
 			}
 
@@ -107,7 +109,7 @@ func DeleteFilesWithLookupVolumeId(grpcDialOption grpc.DialOption, fileIds []str
 		ret = append(ret, result...)
 	}
 
-	glog.V(0).Infof("deleted %d items", len(ret))
+	glog.V(1).Infof("deleted %d items", len(ret))
 
 	return ret, err
 }
@@ -115,7 +117,7 @@ func DeleteFilesWithLookupVolumeId(grpcDialOption grpc.DialOption, fileIds []str
 // DeleteFilesAtOneVolumeServer deletes a list of files that is on one volume server via gRpc
 func DeleteFilesAtOneVolumeServer(volumeServer string, grpcDialOption grpc.DialOption, fileIds []string) (ret []*volume_server_pb.DeleteResult, err error) {
 
-	err = WithVolumeServerClient(volumeServer, grpcDialOption, func(volumeServerClient volume_server_pb.VolumeServerClient) error {
+	err = WithVolumeServerClient(volumeServer, grpcDialOption, func(ctx context.Context, volumeServerClient volume_server_pb.VolumeServerClient) error {
 
 		req := &volume_server_pb.BatchDeleteRequest{
 			FileIds: fileIds,
@@ -12,7 +12,7 @@ import (
 	"strings"
 )
 
-func WithVolumeServerClient(volumeServer string, grpcDialOption grpc.DialOption, fn func(volume_server_pb.VolumeServerClient) error) error {
+func WithVolumeServerClient(volumeServer string, grpcDialOption grpc.DialOption, fn func(context.Context, volume_server_pb.VolumeServerClient) error) error {
 
 	ctx := context.Background()
 
@@ -21,9 +21,9 @@ func WithVolumeServerClient(volumeServer string, grpcDialOption grpc.DialOption,
 		return err
 	}
 
-	return util.WithCachedGrpcClient(ctx, func(grpcConnection *grpc.ClientConn) error {
+	return util.WithCachedGrpcClient(ctx, func(ctx2 context.Context, grpcConnection *grpc.ClientConn) error {
 		client := volume_server_pb.NewVolumeServerClient(grpcConnection)
-		return fn(client)
+		return fn(ctx2, client)
 	}, grpcAddress, grpcDialOption)
 
 }
@@ -38,7 +38,7 @@ func toVolumeServerGrpcAddress(volumeServer string) (grpcAddress string, err err
 	return fmt.Sprintf("%s:%d", volumeServer[0:sepIndex], port+10000), nil
 }
 
-func WithMasterServerClient(masterServer string, grpcDialOption grpc.DialOption, fn func(masterClient master_pb.SeaweedClient) error) error {
+func WithMasterServerClient(masterServer string, grpcDialOption grpc.DialOption, fn func(ctx2 context.Context, masterClient master_pb.SeaweedClient) error) error {
 
 	ctx := context.Background()
 
@@ -47,9 +47,9 @@ func WithMasterServerClient(masterServer string, grpcDialOption grpc.DialOption,
 		return fmt.Errorf("failed to parse master grpc %v: %v", masterServer, parseErr)
 	}
 
-	return util.WithCachedGrpcClient(ctx, func(grpcConnection *grpc.ClientConn) error {
+	return util.WithCachedGrpcClient(ctx, func(ctx2 context.Context, grpcConnection *grpc.ClientConn) error {
 		client := master_pb.NewSeaweedClient(grpcConnection)
-		return fn(client)
+		return fn(ctx2, client)
 	}, masterGrpcAddress, grpcDialOption)
 
 }
@@ -99,12 +99,12 @@ func LookupVolumeIds(server string, grpcDialOption grpc.DialOption, vids []strin
 
 	//only query unknown_vids
 
-	err := WithMasterServerClient(server, grpcDialOption, func(masterClient master_pb.SeaweedClient) error {
+	err := WithMasterServerClient(server, grpcDialOption, func(ctx context.Context, masterClient master_pb.SeaweedClient) error {
 
 		req := &master_pb.LookupVolumeRequest{
 			VolumeIds: unknown_vids,
 		}
-		resp, grpcErr := masterClient.LookupVolume(context.Background(), req)
+		resp, grpcErr := masterClient.LookupVolume(ctx, req)
 		if grpcErr != nil {
 			return grpcErr
 		}
@@ -9,9 +9,9 @@ import (
 
 func Statistics(server string, grpcDialOption grpc.DialOption, req *master_pb.StatisticsRequest) (resp *master_pb.StatisticsResponse, err error) {
 
-	err = WithMasterServerClient(server, grpcDialOption, func(masterClient master_pb.SeaweedClient) error {
+	err = WithMasterServerClient(server, grpcDialOption, func(ctx context.Context, masterClient master_pb.SeaweedClient) error {
 
-		grpcResponse, grpcErr := masterClient.Statistics(context.Background(), req)
+		grpcResponse, grpcErr := masterClient.Statistics(ctx, req)
 		if grpcErr != nil {
 			return grpcErr
 		}
@@ -203,7 +203,7 @@ func upload_one_chunk(filename string, reader io.Reader, master,
 ) (size uint32, e error) {
 	glog.V(4).Info("Uploading part ", filename, " to ", fileUrl, "...")
 	uploadResult, uploadError := Upload(fileUrl, filename, reader, false,
-		"application/octet-stream", nil, jwt)
+		"", nil, jwt)
 	if uploadError != nil {
 		return 0, uploadError
 	}
@@ -8,9 +8,9 @@ import (
 
 func GetVolumeSyncStatus(server string, grpcDialOption grpc.DialOption, vid uint32) (resp *volume_server_pb.VolumeSyncStatusResponse, err error) {
 
-	WithVolumeServerClient(server, grpcDialOption, func(client volume_server_pb.VolumeServerClient) error {
+	WithVolumeServerClient(server, grpcDialOption, func(ctx context.Context, client volume_server_pb.VolumeServerClient) error {
 
-		resp, err = client.VolumeSyncStatus(context.Background(), &volume_server_pb.VolumeSyncStatusRequest{
+		resp, err = client.VolumeSyncStatus(ctx, &volume_server_pb.VolumeSyncStatusRequest{
 			VolumeId: vid,
 		})
 		return nil
@@ -26,9 +26,9 @@ func TailVolume(master string, grpcDialOption grpc.DialOption, vid needle.Volume
 }
 
 func TailVolumeFromSource(volumeServer string, grpcDialOption grpc.DialOption, vid needle.VolumeId, sinceNs uint64, idleTimeoutSeconds int, fn func(n *needle.Needle) error) error {
-	return WithVolumeServerClient(volumeServer, grpcDialOption, func(client volume_server_pb.VolumeServerClient) error {
+	return WithVolumeServerClient(volumeServer, grpcDialOption, func(ctx context.Context, client volume_server_pb.VolumeServerClient) error {
 
-		stream, err := client.VolumeTailSender(context.Background(), &volume_server_pb.VolumeTailSenderRequest{
+		stream, err := client.VolumeTailSender(ctx, &volume_server_pb.VolumeTailSenderRequest{
 			VolumeId:           uint32(vid),
 			SinceNs:            sinceNs,
 			IdleTimeoutSeconds: uint32(idleTimeoutSeconds),
@@ -12,7 +12,7 @@ service SeaweedFiler {
     rpc LookupDirectoryEntry (LookupDirectoryEntryRequest) returns (LookupDirectoryEntryResponse) {
     }
 
-    rpc ListEntries (ListEntriesRequest) returns (ListEntriesResponse) {
+    rpc ListEntries (ListEntriesRequest) returns (stream ListEntriesResponse) {
     }
 
     rpc CreateEntry (CreateEntryRequest) returns (CreateEntryResponse) {
@@ -64,7 +64,7 @@ message ListEntriesRequest {
 }
 
 message ListEntriesResponse {
-    repeated Entry entries = 1;
+    Entry entry = 1;
 }
 
 message Entry {
@@ -123,9 +123,11 @@ message FuseAttributes {
 message CreateEntryRequest {
     string directory = 1;
     Entry entry = 2;
+    bool o_excl = 3;
 }
 
 message CreateEntryResponse {
+    string error = 1;
 }
 
 message UpdateEntryRequest {
@@ -151,7 +151,7 @@ func (m *ListEntriesRequest) GetLimit() uint32 {
 }
 
 type ListEntriesResponse struct {
-	Entries []*Entry `protobuf:"bytes,1,rep,name=entries" json:"entries,omitempty"`
+	Entry *Entry `protobuf:"bytes,1,opt,name=entry" json:"entry,omitempty"`
 }
 
 func (m *ListEntriesResponse) Reset() { *m = ListEntriesResponse{} }
@@ -159,9 +159,9 @@ func (m *ListEntriesResponse) String() string { return proto.CompactT
 func (*ListEntriesResponse) ProtoMessage() {}
 func (*ListEntriesResponse) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{3} }
 
-func (m *ListEntriesResponse) GetEntries() []*Entry {
+func (m *ListEntriesResponse) GetEntry() *Entry {
 	if m != nil {
-		return m.Entries
+		return m.Entry
 	}
 	return nil
 }
@@ -497,6 +497,7 @@ func (m *FuseAttributes) GetSymlinkTarget() string {
 type CreateEntryRequest struct {
 	Directory string `protobuf:"bytes,1,opt,name=directory" json:"directory,omitempty"`
 	Entry *Entry `protobuf:"bytes,2,opt,name=entry" json:"entry,omitempty"`
+	OExcl bool `protobuf:"varint,3,opt,name=o_excl,json=oExcl" json:"o_excl,omitempty"`
 }
 
 func (m *CreateEntryRequest) Reset() { *m = CreateEntryRequest{} }
@@ -518,7 +519,15 @@ func (m *CreateEntryRequest) GetEntry() *Entry {
 	return nil
 }
 
+func (m *CreateEntryRequest) GetOExcl() bool {
+	if m != nil {
+		return m.OExcl
+	}
+	return false
+}
+
 type CreateEntryResponse struct {
+	Error string `protobuf:"bytes,1,opt,name=error" json:"error,omitempty"`
 }
 
 func (m *CreateEntryResponse) Reset() { *m = CreateEntryResponse{} }
@@ -526,6 +535,13 @@ func (m *CreateEntryResponse) String() string { return proto.CompactT
 func (*CreateEntryResponse) ProtoMessage() {}
 func (*CreateEntryResponse) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{11} }
 
+func (m *CreateEntryResponse) GetError() string {
+	if m != nil {
+		return m.Error
+	}
+	return ""
+}
+
 type UpdateEntryRequest struct {
 	Directory string `protobuf:"bytes,1,opt,name=directory" json:"directory,omitempty"`
 	Entry *Entry `protobuf:"bytes,2,opt,name=entry" json:"entry,omitempty"`
@@ -1036,7 +1052,7 @@ const _ = grpc.SupportPackageIsVersion4
 
 type SeaweedFilerClient interface {
 	LookupDirectoryEntry(ctx context.Context, in *LookupDirectoryEntryRequest, opts ...grpc.CallOption) (*LookupDirectoryEntryResponse, error)
-	ListEntries(ctx context.Context, in *ListEntriesRequest, opts ...grpc.CallOption) (*ListEntriesResponse, error)
+	ListEntries(ctx context.Context, in *ListEntriesRequest, opts ...grpc.CallOption) (SeaweedFiler_ListEntriesClient, error)
 	CreateEntry(ctx context.Context, in *CreateEntryRequest, opts ...grpc.CallOption) (*CreateEntryResponse, error)
 	UpdateEntry(ctx context.Context, in *UpdateEntryRequest, opts ...grpc.CallOption) (*UpdateEntryResponse, error)
 	DeleteEntry(ctx context.Context, in *DeleteEntryRequest, opts ...grpc.CallOption) (*DeleteEntryResponse, error)
@@ -1065,13 +1081,36 @@ func (c *seaweedFilerClient) LookupDirectoryEntry(ctx context.Context, in *Looku
 	return out, nil
 }
 
-func (c *seaweedFilerClient) ListEntries(ctx context.Context, in *ListEntriesRequest, opts ...grpc.CallOption) (*ListEntriesResponse, error) {
-	out := new(ListEntriesResponse)
-	err := grpc.Invoke(ctx, "/filer_pb.SeaweedFiler/ListEntries", in, out, c.cc, opts...)
+func (c *seaweedFilerClient) ListEntries(ctx context.Context, in *ListEntriesRequest, opts ...grpc.CallOption) (SeaweedFiler_ListEntriesClient, error) {
+	stream, err := grpc.NewClientStream(ctx, &_SeaweedFiler_serviceDesc.Streams[0], c.cc, "/filer_pb.SeaweedFiler/ListEntries", opts...)
 	if err != nil {
 		return nil, err
 	}
-	return out, nil
+	x := &seaweedFilerListEntriesClient{stream}
+	if err := x.ClientStream.SendMsg(in); err != nil {
+		return nil, err
+	}
+	if err := x.ClientStream.CloseSend(); err != nil {
+		return nil, err
+	}
+	return x, nil
+}
+
+type SeaweedFiler_ListEntriesClient interface {
+	Recv() (*ListEntriesResponse, error)
+	grpc.ClientStream
+}
+
+type seaweedFilerListEntriesClient struct {
+	grpc.ClientStream
+}
+
+func (x *seaweedFilerListEntriesClient) Recv() (*ListEntriesResponse, error) {
+	m := new(ListEntriesResponse)
+	if err := x.ClientStream.RecvMsg(m); err != nil {
+		return nil, err
+	}
+	return m, nil
 }
 
 func (c *seaweedFilerClient) CreateEntry(ctx context.Context, in *CreateEntryRequest, opts ...grpc.CallOption) (*CreateEntryResponse, error) {
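To make the effect of the streaming change concrete, here is a hedged sketch of how a caller might consume ListEntries after this commit. The helper name, directory path, and page size are illustrative, the request field names are assumed from the surrounding proto, and the standard context, fmt, and io imports are taken as given.

// Illustrative sketch only: iterate over the server-streamed directory listing.
func listAllEntries(ctx context.Context, client filer_pb.SeaweedFilerClient, dir string) error {
	stream, err := client.ListEntries(ctx, &filer_pb.ListEntriesRequest{
		Directory: dir,  // assumed field name
		Limit:     1024, // assumed paging value
	})
	if err != nil {
		return err
	}
	for {
		resp, err := stream.Recv()
		if err == io.EOF {
			return nil // server closed the stream after the last entry
		}
		if err != nil {
			return err
		}
		// each streamed message now carries exactly one Entry
		fmt.Println(resp.Entry.Name)
	}
}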
@@ -1159,7 +1198,7 @@ func (c *seaweedFilerClient) GetFilerConfiguration(ctx context.Context, in *GetF
 
 type SeaweedFilerServer interface {
 	LookupDirectoryEntry(context.Context, *LookupDirectoryEntryRequest) (*LookupDirectoryEntryResponse, error)
-	ListEntries(context.Context, *ListEntriesRequest) (*ListEntriesResponse, error)
+	ListEntries(*ListEntriesRequest, SeaweedFiler_ListEntriesServer) error
 	CreateEntry(context.Context, *CreateEntryRequest) (*CreateEntryResponse, error)
 	UpdateEntry(context.Context, *UpdateEntryRequest) (*UpdateEntryResponse, error)
 	DeleteEntry(context.Context, *DeleteEntryRequest) (*DeleteEntryResponse, error)
|
|||||||
return interceptor(ctx, in, info, handler)
|
return interceptor(ctx, in, info, handler)
|
||||||
}
|
}
|
||||||
|
|
||||||
func _SeaweedFiler_ListEntries_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
|
func _SeaweedFiler_ListEntries_Handler(srv interface{}, stream grpc.ServerStream) error {
|
||||||
in := new(ListEntriesRequest)
|
m := new(ListEntriesRequest)
|
||||||
if err := dec(in); err != nil {
|
if err := stream.RecvMsg(m); err != nil {
|
||||||
return nil, err
|
return err
|
||||||
}
|
}
|
||||||
if interceptor == nil {
|
return srv.(SeaweedFilerServer).ListEntries(m, &seaweedFilerListEntriesServer{stream})
|
||||||
return srv.(SeaweedFilerServer).ListEntries(ctx, in)
|
}
|
||||||
}
|
|
||||||
info := &grpc.UnaryServerInfo{
|
type SeaweedFiler_ListEntriesServer interface {
|
||||||
Server: srv,
|
Send(*ListEntriesResponse) error
|
||||||
FullMethod: "/filer_pb.SeaweedFiler/ListEntries",
|
grpc.ServerStream
|
||||||
}
|
}
|
||||||
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
|
|
||||||
return srv.(SeaweedFilerServer).ListEntries(ctx, req.(*ListEntriesRequest))
|
type seaweedFilerListEntriesServer struct {
|
||||||
}
|
grpc.ServerStream
|
||||||
return interceptor(ctx, in, info, handler)
|
}
|
||||||
|
|
||||||
|
func (x *seaweedFilerListEntriesServer) Send(m *ListEntriesResponse) error {
|
||||||
|
return x.ServerStream.SendMsg(m)
|
||||||
}
|
}
|
||||||
|
|
||||||
func _SeaweedFiler_CreateEntry_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
|
func _SeaweedFiler_CreateEntry_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
|
||||||
@@ -1381,10 +1423,6 @@ var _SeaweedFiler_serviceDesc = grpc.ServiceDesc{
 			MethodName: "LookupDirectoryEntry",
 			Handler:    _SeaweedFiler_LookupDirectoryEntry_Handler,
 		},
-		{
-			MethodName: "ListEntries",
-			Handler:    _SeaweedFiler_ListEntries_Handler,
-		},
 		{
 			MethodName: "CreateEntry",
 			Handler:    _SeaweedFiler_CreateEntry_Handler,
@@ -1422,113 +1460,121 @@ var _SeaweedFiler_serviceDesc = grpc.ServiceDesc{
 			Handler: _SeaweedFiler_GetFilerConfiguration_Handler,
 		},
 	},
-	Streams: []grpc.StreamDesc{},
+	Streams: []grpc.StreamDesc{
+		{
+			StreamName:    "ListEntries",
+			Handler:       _SeaweedFiler_ListEntries_Handler,
+			ServerStreams: true,
+		},
+	},
 	Metadata: "filer.proto",
 }
 
 func init() { proto.RegisterFile("filer.proto", fileDescriptor0) }
 
 var fileDescriptor0 = []byte{
-	// 1608 bytes of a gzipped FileDescriptorProto
+	// 1633 bytes of a gzipped FileDescriptorProto
 	// (remaining regenerated descriptor bytes not reproduced here)
 }
@@ -1,6 +1,9 @@
 package filer_pb
 
 import (
+	"context"
+	"fmt"
+
 	"github.com/chrislusf/seaweedfs/weed/storage/needle"
 )
 
@@ -67,3 +70,14 @@ func AfterEntryDeserialization(chunks []*FileChunk) {
 
 	}
 }
+
+func CreateEntry(ctx context.Context, client SeaweedFilerClient, request *CreateEntryRequest) error {
+	resp, err := client.CreateEntry(ctx, request)
+	if err == nil && resp.Error != "" {
+		return fmt.Errorf("CreateEntry: %v", resp.Error)
+	}
+	if err != nil {
+		return fmt.Errorf("CreateEntry: %v", err)
+	}
+	return err
+}
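A hedged usage sketch of the helper added above. The directory and file name are illustrative values, and OExcl corresponds to the new o_excl flag introduced in the proto change earlier in this commit.

// Illustrative only: create an entry exclusively, surfacing an error if it already exists.
func createEmptyFile(ctx context.Context, client filer_pb.SeaweedFilerClient) error {
	return filer_pb.CreateEntry(ctx, client, &filer_pb.CreateEntryRequest{
		Directory: "/example/dir", // assumed path
		Entry: &filer_pb.Entry{
			Name: "hello.txt", // assumed name
		},
		OExcl: true, // fail instead of silently overwriting an existing entry
	})
}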
Some files were not shown because too many files have changed in this diff.