Commit Graph

59 Commits

Author SHA1 Message Date
chrislusf
320e946d50 fix ttl change detection
https://github.com/chrislusf/seaweedfs/issues/166
2015-07-17 19:30:25 -07:00
chrislusf
430f371a97 fix wrong logic 2015-07-11 12:20:39 -07:00
chrislusf
2c595d2d16 skip isFileUnchanged checking since ttl always changes
Skip the check since the ttl always changes. Fixing
https://github.com/chrislusf/seaweedfs/issues/166
2015-07-10 09:43:49 -07:00
chrislusf
86cd40fba8 Add "weed backup" command.
This is a precursor to asynchronous replication.
2015-05-26 00:58:41 -07:00
chrislusf
8f88d382a5 Rename variables 2015-05-23 10:16:01 -07:00
chrislusf
b8314fb054 Textual changes. 2015-05-08 23:44:11 -07:00
Stuart P. Bentley
f0c2a2dcb3 Change all chrislusf/weed-fs links to point to chrislu/seaweedfs 2015-04-16 19:18:06 +00:00
chrislusf
49d639ecab Add error checking for file reads. 2015-04-14 23:05:33 -07:00
chrislusf
1b6ab2f6af Add boltdb for volume needle map
boltdb is fairly slow to write: about 6 minutes to recreate the index
for 1,553,934 files. Boltdb loads 1,553,934 x 16 = 24,862,944 bytes from
disk and generates a boltdb file as large as 134,217,728 bytes in 6
minutes.

To compare, leveldb recreates its index, 27,188,148 bytes in size, in 8
seconds.
For the in-memory version, it loads the index in

To test memory consumption, the leveldb or boltdb index is created and
the server is restarted. The benchmark tool then reads lots of files.
There are 7 volumes in the benchmark collection, each with about 1,553K
files.
For leveldb, the memory starts at 142,884KB and stays at 179,340KB.
For boltdb, the memory starts at 73,756KB and stays at 144,564KB.
For in-memory, the memory starts at 368,152KB and stays at 448,032KB.
2015-03-29 11:04:32 -07:00
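A minimal sketch of what a boltdb-backed needle map write might look like, assuming the 16-byte entry layout implied by the arithmetic above (8-byte key plus 4-byte offset and 4-byte size); the bucket and helper names are illustrative, not the actual SeaweedFS code:

```go
package main

import (
	"encoding/binary"

	"github.com/boltdb/bolt"
)

// putNeedle is a hypothetical helper: it persists one needle map entry
// (8-byte key -> 4-byte offset + 4-byte size, i.e. 16 bytes per file,
// matching the 1,553,934 x 16 figure above) into a boltdb bucket.
func putNeedle(db *bolt.DB, key uint64, offset, size uint32) error {
	return db.Update(func(tx *bolt.Tx) error {
		b, err := tx.CreateBucketIfNotExists([]byte("needles"))
		if err != nil {
			return err
		}
		var k [8]byte
		var v [8]byte
		binary.BigEndian.PutUint64(k[:], key)
		binary.BigEndian.PutUint32(v[0:4], offset)
		binary.BigEndian.PutUint32(v[4:8], size)
		return b.Put(k[:], v[:])
	})
}

func main() {
	db, err := bolt.Open("needle.bdb", 0644, nil)
	if err != nil {
		panic(err)
	}
	defer db.Close()
	if err := putNeedle(db, 101, 8, 2048); err != nil {
		panic(err)
	}
}
```

One write transaction per entry, as sketched here, is the slow path; batching many entries into a single Update transaction is what keeps bulk index recreation tolerable.
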
chrislusf
020ba6c9a8 add leveldb support for needle map
This should reduce memory consumption. However, in tests with millions
of files, it consumes more memory. Need to see whether this will work
out. If not, boltdb will be tested later.
2015-03-27 16:34:58 -07:00
chrislusf
d48d76cb4f adding special handling to recover data if possible
For bug #87 and #93, add special handling to recover data if possible.
2015-03-09 01:10:04 -07:00
Chris Lu
41bd5179f3 Resolve Conflicts 2015-01-14 09:56:13 -08:00
Chris Lu
af416189f1 Cleanup error printing. 2015-01-13 17:04:41 -08:00
Lei Xue
029e3a3822 fix some typos 2015-01-13 18:46:56 +08:00
yanyiwu
5a40f539f2 fix bug: uploading a file which already exists returns a wrong file size. 2014-12-26 15:36:33 +08:00
Chris Lu
179d36ba0e formatting code by: goimports -w=true . 2014-10-26 11:34:55 -07:00
Chris Lu
e9a8999f63 print error the correct way. 2014-10-21 01:27:06 -07:00
wyy
4126280d55 use github.com/chrislusf instead of github.com/aszxqw 2014-09-25 16:57:22 +08:00
wyy
1cd19447e3 use github.com/aszxqw instead of code.google.com/p 2014-09-25 00:47:09 +08:00
Chris Lu
b9aee2defb add TTL support
The volume TTL and the file TTL are not necessarily the same. As long as
the file TTL is smaller than the volume TTL, it'll be fine.

volume TTL is used when assigning file id, e.g.
http://.../dir/assign?ttl=3h

file TTL is used when uploading
2014-09-20 12:38:59 -07:00
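A short sketch of the two-step flow described above, assuming a master at localhost:9333 and the default volume server port; the fid and addresses below are placeholders that would normally be parsed from the assign response:

```go
package main

import (
	"bytes"
	"fmt"
	"io/ioutil"
	"mime/multipart"
	"net/http"
)

func main() {
	// Step 1: ask the master for a file id; the ttl here selects (or creates)
	// a volume whose volume-level TTL can hold it.
	resp, err := http.Get("http://localhost:9333/dir/assign?ttl=3h")
	if err != nil {
		panic(err)
	}
	body, _ := ioutil.ReadAll(resp.Body)
	resp.Body.Close()
	fmt.Printf("assign response: %s\n", body)

	// Step 2: upload the content with a file-level TTL, which may be shorter
	// than the volume TTL. The fid and volume address below are placeholders;
	// in practice they come from the JSON returned above.
	var buf bytes.Buffer
	w := multipart.NewWriter(&buf)
	part, _ := w.CreateFormFile("file", "hello.txt")
	part.Write([]byte("hello"))
	w.Close()

	req, _ := http.NewRequest("POST", "http://localhost:8080/3,01637037d6?ttl=1h", &buf)
	req.Header.Set("Content-Type", w.FormDataContentType())
	if _, err := http.DefaultClient.Do(req); err != nil {
		panic(err)
	}
}
```
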
Chris Lu
4c58cef24a a bit of refactoring to prepare for the volume format change and backward
compatibility.
2014-08-25 11:37:00 -07:00
Chris Lu
4b7b439be9 Reduce memory usage for "weed fix" 2014-05-31 17:10:51 -07:00
Chris Lu
fe3f06435e Refactor out volume vacuum. 2014-05-19 20:54:39 -07:00
Chris Lu
e7aaa24da8 Refactor out volume vacuum. 2014-05-19 19:24:35 -07:00
Chris Lu
3b5035c468 1. v0.54
2. go vet found many printing format errors
2014-04-17 00:16:44 -07:00
Chris Lu
a0955aa4dd refactor functions 2014-03-23 21:57:10 -07:00
Chris Lu
0563773944 switch to ReadAt() for thread-safe read
fix bugs during volume compaction
2014-03-19 04:48:13 -07:00
Chris Lu
af32b52727 1. no locks for all read operations! Switching to pread for all reads.
2. prevent heartbeat lost when vacuuming, by removing locks on Size()
function
2014-03-18 23:48:01 -07:00
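A minimal illustration of the ReadAt/pread approach in the two commits above: the offset is passed explicitly on every call, so concurrent goroutines never fight over a shared seek position and reads need no lock. File name and offsets are illustrative:

```go
package main

import (
	"fmt"
	"os"
	"sync"
)

func main() {
	f, err := os.Open("volume.dat") // illustrative volume file
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// ReadAt (pread on Unix) takes the offset as an argument, so these
	// goroutines can read the same *os.File concurrently without a mutex
	// and without disturbing each other's position.
	var wg sync.WaitGroup
	for _, off := range []int64{0, 4096, 8192} {
		wg.Add(1)
		go func(off int64) {
			defer wg.Done()
			buf := make([]byte, 16)
			n, err := f.ReadAt(buf, off)
			fmt.Println(off, n, err)
		}(off)
	}
	wg.Wait()
}
```
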
Chris Lu
cd10c277b2 can now delete a collection! Is this a dangerous feature? Only deleting
"benchmark" collections is enabled for now.
2014-03-10 11:43:54 -07:00
Chris Lu
e6e85a6b2c truncate file content during creating 2014-03-09 18:50:09 -07:00
Chris Lu
27c74a7e66 Major:
change replication_type to ReplicaPlacement, hopefully cleaner code
works for 9 possible ReplicaPlacement
xyz
x : number of copies on other data centers
y : number of copies on other racks
z : number of copies on current rack
x y z each can be 0,1,2

Minor:
weed server "-mdir" default to "-dir" if empty
2014-03-02 22:16:54 -08:00
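A tiny sketch of decoding the xyz code described above; the type and function names are illustrative, not the actual SeaweedFS ReplicaPlacement implementation:

```go
package main

import "fmt"

// replicaPlacement mirrors the xyz digits above:
// x = copies on other data centers, y = copies on other racks,
// z = copies on the same rack. Each digit may be 0, 1, or 2.
type replicaPlacement struct {
	DiffDataCenterCount int
	DiffRackCount       int
	SameRackCount       int
}

func parseReplicaPlacement(code string) (replicaPlacement, error) {
	if len(code) != 3 {
		return replicaPlacement{}, fmt.Errorf("replication code %q must have 3 digits", code)
	}
	var d [3]int
	for i := 0; i < 3; i++ {
		n := int(code[i] - '0')
		if n < 0 || n > 2 {
			return replicaPlacement{}, fmt.Errorf("digit %q out of range in %q", code[i], code)
		}
		d[i] = n
	}
	return replicaPlacement{d[0], d[1], d[2]}, nil
}

func main() {
	rp, _ := parseReplicaPlacement("001") // one extra copy on the same rack
	total := 1 + rp.DiffDataCenterCount + rp.DiffRackCount + rp.SameRackCount
	fmt.Printf("%+v => %d copies in total\n", rp, total)
}
```
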
Chris Lu
67125688ed Avoid creating *.dat file when reading and it does not exist 2014-02-06 17:32:06 -08:00
Chris Lu
cda2a6b510 trivial refactoring 2014-01-21 20:51:46 -08:00
Chris Lu
aed74b5568 adjust function name 2013-11-18 15:05:11 -08:00
Chris Lu
3b68711139 support for collections! 2013-11-12 02:21:22 -08:00
Chris Lu
9e9b2c0703 log changes 2013-10-31 12:55:19 -07:00
Chris Lu
3f5f8657d2 add a command to force compaction of a volume, removing deleted files 2013-09-28 22:18:52 -07:00
Chris Lu
69ac6b6bf6 Issue 45 in weed-fs: [Compact issue] Offset overflow
New issue 45 by hieu.hcmus@gmail.com: [Compact issue] Offset overflow
http://code.google.com/p/weed-fs/issues/detail?id=45

You are using uint32 (maximum 4GB) to store the needle offset (maximum
32GB) when compacting.
Currently it is OK if the volume size is < 4GB.
Change the variable "offset" in the ScanVolumeFile function to uint64 to
fix the issue.
2013-09-19 11:06:14 -07:00
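A small sketch of the overflow being described, assuming the 8-byte needle alignment that lets a 4-byte index field address up to 32GB; variable names are illustrative:

```go
package main

import "fmt"

func main() {
	// Index entries store the needle offset as a uint32 count of 8-byte
	// blocks, which is how a 4-byte field can address up to 32GB.
	const paddingSize = 8

	// During compaction, keeping the running byte offset in a uint32
	// wraps around as soon as the volume passes 4GB:
	var byteOffset uint64 = 5 << 30 // 5GB into a large volume
	truncated := uint32(byteOffset) // silently wraps to 1GB
	fmt.Println(byteOffset, uint64(truncated))

	// The fix: keep the running offset as uint64 and only divide by the
	// padding size when writing the 4-byte index entry.
	idxOffset := uint32(byteOffset / paddingSize) // fits for volumes up to 32GB
	fmt.Println(idxOffset)
}
```
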
Chris Lu
82b74c7940 issue 43 "go fmt" changes from "Ryan S. Brown" <sb@ryansb.com>
some basic changes to parse upload url
2013-09-01 23:58:21 -07:00
Chris Lu
078118ecba v0.40 2013-08-12 23:48:10 -07:00
Chris Lu
44c4e74655 correct and cleaner logic to fall back to read-only mode
checking file permissions directly, since the try-and-catch-exception
approach does not work consistently, as seen in bug #41
2013-08-12 16:53:32 -07:00
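A hedged sketch of the up-front check the commit describes: test whether the volume data file can actually be opened for writing instead of waiting for a write to fail; the helper and file name are illustrative:

```go
package main

import (
	"fmt"
	"os"
)

// canWrite reports whether the file can be opened for writing,
// a direct check rather than relying on a later write error.
func canWrite(path string) bool {
	f, err := os.OpenFile(path, os.O_WRONLY, 0)
	if err != nil {
		return false
	}
	f.Close()
	return true
}

func main() {
	readOnly := !canWrite("volume.dat") // illustrative volume file name
	fmt.Println("fall back to read-only mode:", readOnly)
}
```
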
Chris Lu
82f6a6838f wording change 2013-08-11 13:15:11 -07:00
Chris Lu
7cef280bdc handle cases where .idx files are also read-only
adjusting log level
2013-08-11 11:38:55 -07:00
Chris Lu
ed154053c8 switching to temporarily use glog library 2013-08-08 23:57:22 -07:00
Chris Lu
8f0b527b28 a little more concise 2013-07-28 22:53:25 -07:00
Chris Lu
81debd73d4 Issue 37: Replicate delete
Reported by hieu.hcmus, Today (24 minutes ago)


What steps will reproduce the problem?
1. Create 2 volume servers on the same rack, replication type = 001
2. Upload a file
3. Delete the file

What is the expected output? What do you see instead?
Expected output: the file is deleted on both volume servers
But: the file is only deleted on one volume server

What version of the product are you using? On what operating system?
0.36
Please provide any additional information below.

After removing the NeedleValue from the NeedleMap, the size = 0 and it
causes the error.

I uploaded the patch to fix this error
2013-07-28 22:49:17 -07:00
Chris Lu
ac15868694 clean up log fmt usage. Move to log for important data changes
and warnings.
2013-07-13 19:44:24 -07:00
Chris Lu
1165632fa0 use bytes.Equal() instead. Thanks to Thomas for the suggestion. 2013-07-13 13:51:47 -07:00
Chris Lu
4c280bc317 ensure append only for deleted files 2013-07-12 00:55:21 -07:00
Chris Lu
b87ec11c1c empty deleted file 2013-07-11 23:38:44 -07:00