* fix: disallow file names that are too long when writing a file
* change bool LongerName to MaxFilenameLength
---------
Co-authored-by: Konstantin Lebedev <9497591+kmlebedev@users.noreply.github.com>
An etcd cluster is not necessarily dedicated only to seaweedfs.
This security enhancement adds a customizable key_prefix option to the etcd filer store.
It allows an etcd cluster administrator to restrict the seaweedfs etcd user to reading and writing only the subset of keys under key_prefix, instead of all keys on the etcd cluster.
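As a rough sketch of the idea (illustrative names, not the actual SeaweedFS code): with a key_prefix configured, the store prepends the prefix to every key before it touches etcd, so the etcd user only needs permissions under that key range.

    package main

    import "fmt"

    // EtcdStore is a stand-in for the etcd filer store; only the
    // prefixing behavior is sketched here.
    type EtcdStore struct {
        keyPrefix string
    }

    // prefixedKey maps a filer path to the key actually used in etcd.
    func (store *EtcdStore) prefixedKey(fullpath string) string {
        return store.keyPrefix + fullpath
    }

    func main() {
        store := &EtcdStore{keyPrefix: "seaweedfs."}
        // An etcd role can now be limited to the "seaweedfs." key range.
        fmt.Println(store.prefixedKey("/buckets/b1/notes.txt"))
    }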
* compare chunks by timestamp
* fix slab clearing error
* fix test compilation
* move the oldest chunk to sealed, instead of choosing by fullness
* lock on fh.entryViewCache
* remove verbose logs
* revert slab clearing
* fewer logs
* fewer logs
* track write and read by timestamp
* remove useless logic
* add entry lock on file handle release
* use mem chunk only, swap file chunk has problems
* comment out code that may be used later
* add debug mode to compare data read and write
* more efficient readResolvedChunks with linked list
* small optimization
* fix test compilation
* minor fix on writer
* add SeparateGarbageChunks
* group chunks into sections
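Reading between the lines (a sketch under assumptions; the section size here is made up), grouping chunks into sections lets a file offset map to a section index, so an operation on a large file only visits the chunks of the sections it touches rather than the whole chunk list:

    package main

    import "fmt"

    // sectionSize is illustrative; the real value may differ.
    const sectionSize = int64(64 * 1024 * 1024)

    type SectionIndex int64

    // sectionIndexFor locates the section that contains a file offset.
    func sectionIndexFor(offset int64) SectionIndex {
        return SectionIndex(offset / sectionSize)
    }

    func main() {
        fmt.Println(sectionIndexFor(200 * 1024 * 1024)) // 3
    }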
* turn off debug mode
* fix tests
* fix tests
* tmp enable swap file chunk
* Revert "tmp enable swap file chunk"
This reverts commit 985137ec47.
* simple refactoring
* simple refactoring
* do not re-use swap file chunk. Sealed chunks should not be re-used.
* comment out debugging facilities
* either mem chunk or swap file chunk is fine now
* remove orderedMutex as *semaphore.Weighted
it was not found to be impactful
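For context, a *semaphore.Weighted of capacity 1 behaves like a mutex whose waiters are served roughly in FIFO order; if that ordering buys nothing, a plain sync.Mutex is simpler. A minimal comparison (illustrative, not the project's code):

    package main

    import (
        "context"
        "sync"

        "golang.org/x/sync/semaphore"
    )

    var (
        orderedMutex = semaphore.NewWeighted(1) // capacity 1: a FIFO-ordered "mutex"
        plainMutex   sync.Mutex                 // simpler, no ordering guarantee
    )

    func withOrderedLock(ctx context.Context, f func()) error {
        if err := orderedMutex.Acquire(ctx, 1); err != nil {
            return err // ctx canceled while waiting
        }
        defer orderedMutex.Release(1)
        f()
        return nil
    }

    func withPlainLock(f func()) {
        plainMutex.Lock()
        defer plainMutex.Unlock()
        f()
    }

    func main() {
        withPlainLock(func() {})
        _ = withOrderedLock(context.Background(), func() {})
    }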
* optimize size calculation for changing large files
* optimize performance to avoid going through the long list of chunks
* still problems with swap file chunk
* rename
* tiny optimization
* swap file chunk saves only successfully read data
* fix
* enable both mem and swap file chunk
* resolve chunks with range
* rename
* fix chunk interval list
* also change file handle chunk group when adding chunks
* pick inactive chunk with time-decayed counter
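One plausible reading of "time-decayed counter" (a sketch under that assumption, not necessarily the actual implementation): each chunk's access count decays with elapsed time, so a chunk that was hot long ago scores below one touched recently, and the lowest score is the inactive pick.

    package main

    import (
        "fmt"
        "math"
        "time"
    )

    // decayedCounter halves its value once per halfLife of inactivity.
    type decayedCounter struct {
        value      float64
        lastUpdate time.Time
        halfLife   time.Duration
    }

    // Touch decays the old value, then records one new access.
    func (c *decayedCounter) Touch(now time.Time) {
        elapsed := now.Sub(c.lastUpdate)
        c.value *= math.Pow(0.5, float64(elapsed)/float64(c.halfLife))
        c.value++
        c.lastUpdate = now
    }

    // Score reports the decayed value as of now; lowest score = most inactive.
    func (c *decayedCounter) Score(now time.Time) float64 {
        elapsed := now.Sub(c.lastUpdate)
        return c.value * math.Pow(0.5, float64(elapsed)/float64(c.halfLife))
    }

    func main() {
        c := decayedCounter{halfLife: time.Minute, lastUpdate: time.Now()}
        c.Touch(time.Now())
        fmt.Println(c.Score(time.Now().Add(2 * time.Minute))) // ~0.25
    }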
* fix compilation
* avoid nil with empty fh.entry
* refactoring
* rename
* rename
* refactor visible intervals to *list.List
* refactor chunkViews to *list.List
* add IntervalList for generic interval list
* change visible interval to use IntervalList in generics
* change chunkViews to *IntervalList[*ChunkView]
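A generic interval list in the spirit of this refactor might look roughly like the following (field and method names are guesses, not the actual IntervalList API); the lock anticipates the concurrent use mentioned further down:

    package main

    import "sync"

    // Interval is one [StartOffset, StopOffset) range carrying a typed
    // value, linked to its successor in offset order.
    type Interval[T any] struct {
        StartOffset int64
        StopOffset  int64
        Value       T
        Next        *Interval[T]
    }

    // IntervalList is a linked list of non-overlapping intervals sorted by
    // StartOffset, guarded for concurrent readers and writers.
    type IntervalList[T any] struct {
        head *Interval[T]
        lock sync.RWMutex
    }

    // ChunkView is shown only as a placeholder value type here.
    type ChunkView struct{ FileId string }

    func main() {
        var chunkViews IntervalList[*ChunkView]
        _ = chunkViews
    }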
* use NewFileChunkSection to create
* rename variables
* refactor
* fix renaming leftover
* renaming
* renaming
* add insert interval
* interval list adds lock
* incrementally add chunks to readers
Fixes:
1. set start and stop offset for the value object
2. clone the value object
3. use pointer instead of copy-by-value when passing to interval.Value
4. use insert interval, since chunks can be added out of order (see the sketch after this list)
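Point 4 above is why a sorted insert is needed; a simplified version over the Interval/IntervalList sketch earlier (overlap splitting omitted, names illustrative):

    // InsertInterval keeps the list sorted by StartOffset even when chunks
    // arrive out of order; splitting overlapping intervals is omitted.
    func (list *IntervalList[T]) InsertInterval(iv *Interval[T]) {
        list.lock.Lock()
        defer list.lock.Unlock()
        p := &list.head
        for *p != nil && (*p).StartOffset < iv.StartOffset {
            p = &(*p).Next
        }
        // iv is stored by pointer (point 3): later mutations are visible
        // without copying the value back into the list.
        iv.Next = *p
        *p = iv
    }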
* fix tests compilation
* fix tests compilation
* refactor(filechunk_manifest): `localProcesed` -> `localProcessed`
Signed-off-by: Ryan Russell <git@ryanrussell.org>
* refactor: `saveChunkedFileIntevalToStorage` -> `saveChunkedFileIntervalToStorage`
Signed-off-by: Ryan Russell <git@ryanrussell.org>
* refactor: `SafeRenewInteval` -> `SafeRenewInterval`
Signed-off-by: Ryan Russell <git@ryanrussell.org>
* refactor: `InitLockInteval` -> `InitLockInterval`
Signed-off-by: Ryan Russell <git@ryanrussell.org>
* refactor: `RenewInteval` -> `RenewInterval`
Signed-off-by: Ryan Russell <git@ryanrussell.org>
Hi, how can I add bucket permissions to a user now?
Previously, if I needed to add permission to an existing credential, I simply repeated the s3.configure command with a different bucket name.
Now I am getting this error:
duplicate accessKey[XXXX], already configured in user[YYYY]
s3.configure -access_key key -actions Read,Write,List -buckets bucket1 -secret_key secr -user user1
s3.configure -access_key key -actions Read,Write,List -buckets bucket2 -secret_key secr -user user1
Sometimes when an unexpected error occurs, the cacher would set an
error and return. However, it would not broadcast the condition
signal in that case, leaving the goroutine that runs readChunkAt
stuck forever.
I figured that the condition variable is unnecessary, because
readChunkAt acquires a lock that is still held by the cacher
goroutine anyway. Callers of startCaching have to wait on a
WaitGroup, which makes sure that readChunkAt cannot acquire the
lock before startCaching does.
This way readChunkAt can execute normally and check for the error.
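Roughly, the pattern described (names illustrative; fetchChunk is a hypothetical stand-in for the real chunk download):

    package main

    import (
        "io"
        "sync"
    )

    type chunkCacher struct {
        mu   sync.Mutex
        wg   sync.WaitGroup
        err  error
        data []byte
    }

    // fetchChunk is a hypothetical stand-in for the actual chunk download.
    func fetchChunk() ([]byte, error) { return []byte("chunk"), nil }

    func (c *chunkCacher) startCaching() {
        c.mu.Lock() // take the lock *before* releasing waiters...
        c.wg.Done() // ...so readChunkAt can never get the lock first
        defer c.mu.Unlock()
        c.data, c.err = fetchChunk() // an error is recorded under the lock
    }

    func (c *chunkCacher) readChunkAt(buf []byte, offset int64) (int, error) {
        c.mu.Lock() // blocks until startCaching is completely done
        defer c.mu.Unlock()
        if c.err != nil {
            return 0, c.err // no condition variable needed to see the error
        }
        if offset >= int64(len(c.data)) {
            return 0, io.EOF
        }
        return copy(buf, c.data[offset:]), nil
    }

    func main() {
        c := &chunkCacher{}
        c.wg.Add(1)
        go c.startCaching()
        c.wg.Wait() // callers wait here; the cacher now holds the lock
        buf := make([]byte, 8)
        _, _ = c.readChunkAt(buf, 0)
    }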
* Fix FUSE server buffer leaks in file gaps
This change zeros read buffers when encountering file gaps during
file/chunk reads in FUSE mounts.
It prevents leaking internal buffers of the FUSE server, which could
otherwise reveal metadata, directory listings, file contents and
other data related to FUSE API calls.
The issue was that buffers are reused, but when a file gap was
encountered the buffer was not zeroed accordingly, so the stale
contents of the buffer were kept and returned.
* Move zero logic into its own method
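A minimal version of such a method (a sketch of the idea, not the actual SeaweedFS helper):

    package main

    // zero clears buf over [start, stop) so that a reused read buffer
    // cannot leak bytes left over from a previous request when the read
    // falls into a gap (hole) between chunks.
    func zero(buf []byte, start, stop int64) {
        if start < 0 {
            start = 0
        }
        if stop > int64(len(buf)) {
            stop = int64(len(buf))
        }
        for i := start; i < stop; i++ {
            buf[i] = 0
        }
    }

    func main() {
        buf := []byte("stale data from a previous read")
        zero(buf, 0, int64(len(buf)))
    }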