- 12 Nov, 2009 2 commits
-
-
Sage Weil authored
We occasionally want to make a best-effort attempt to invalidate cache pages without fear of blocking. If this fails, we fall back to an async invalidate in another thread. Use invalidate_mapping_pages instead of invalidate_inode_pages2, as the former skips locked pages and will not deadlock. Signed-off-by: Sage Weil <sage@newdream.net>
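For illustration, a minimal kernel-style sketch of the pattern (the work item and queueing helper are hypothetical stand-ins, not the actual fs/ceph symbols):

#include <linux/errno.h>
#include <linux/fs.h>
#include <linux/pagemap.h>
#include <linux/workqueue.h>

/* hedged sketch: best-effort, non-blocking invalidate with async fallback */
static int try_nonblocking_invalidate(struct inode *inode,
                                      struct work_struct *async_work)
{
        struct address_space *mapping = inode->i_mapping;

        /* skips locked pages instead of waiting on them, so it cannot deadlock */
        invalidate_mapping_pages(mapping, 0, -1);

        if (mapping->nrpages == 0)
                return 0;               /* everything dropped; done */

        /* best effort failed; fall back to an async invalidate */
        schedule_work(async_work);      /* hypothetical work item */
        return -EAGAIN;
}
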
-
Sage Weil authored
Signed-off-by: Sage Weil <sage@newdream.net>
-
- 11 Nov, 2009 4 commits
-
-
Sage Weil authored
This helps the user know what's going on during the (involved) reconnect process. They already see when the mds fails and reconnect starts. Signed-off-by: Sage Weil <sage@newdream.net>
-
Sage Weil authored
Signed-off-by: Sage Weil <sage@newdream.net>
-
Sage Weil authored
It was hidden from sync readdir, but not the cached dcache version. Signed-off-by: Sage Weil <sage@newdream.net>
-
Sage Weil authored
We don't get an explicit affirmative confirmation that our caps reconnect, nor do we necessarily want to pay that cost. So, take all this code out for now. Signed-off-by: Sage Weil <sage@newdream.net>
-
- 10 Nov, 2009 1 commit
-
-
Sage Weil authored
We need to make sure we only swab the address during the banner once. So break process_banner out of process_connect, and clean up the surrounding code so that these are distinct phases of the handshake. Signed-off-by: Sage Weil <sage@newdream.net>
-
- 09 Nov, 2009 1 commit
-
-
Sage Weil authored
We were using the cap_gen to track both stale caps (caps that timed out due to temporarily losing touch with the mds) and dead caps that did not reconnect after an MDS failure. Introduce a recon_gen counter to track reconnections to restarted MDSs and kill dead caps based on that instead. Rename gen to cap_gen while we're at it to make it more clear which is which. Signed-off-by: Sage Weil <sage@newdream.net>
-
- 08 Nov, 2009 1 commit
-
-
Sage Weil authored
Make the integer hash function a property of the bucket it is used on. This allows us to gracefully add support for new hash functions without starting from scratch. Signed-off-by: Sage Weil <sage@newdream.net>
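As a rough user-space sketch of the idea (structure and names are illustrative, not the real CRUSH types): each bucket records which hash it was built with, and lookups dispatch on that field, so new hashes can be added later without invalidating existing buckets.

#include <stdint.h>
#include <stdio.h>

enum bucket_hash_alg { HASH_RJENKINS1 = 0 /* future algorithms append here */ };

struct bucket {
        enum bucket_hash_alg hash;      /* hash this bucket was built with */
        uint32_t size;
};

static uint32_t hash_rjenkins1(uint32_t x, uint32_t r)
{
        /* placeholder mix; the real map uses the Jenkins hash */
        return (x * 2654435761u) ^ (r * 0x9e3779b9u);
}

static uint32_t bucket_hash(const struct bucket *b, uint32_t x, uint32_t r)
{
        switch (b->hash) {
        case HASH_RJENKINS1:
        default:
                return hash_rjenkins1(x, r);
        }
}

int main(void)
{
        struct bucket b = { .hash = HASH_RJENKINS1, .size = 4 };

        printf("item = %u\n", bucket_hash(&b, 1234, 0) % b.size);
        return 0;
}
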
-
- 07 Nov, 2009 3 commits
-
-
Sage Weil authored
The object will be hashed to a placement seed (ps) based on the pg_pool's hash function. This allows new hashes to be introduced into an existing object store, or selection of a hash appropriate to the objects that will be stored in a particular pool. Signed-off-by: Sage Weil <sage@newdream.net>
-
Sage Weil authored
We were using the (weak) dcache hash function, but it was leaving lower bits consecutive for consecutive (inode) objects. We really want to make the object to pg mapping random and uniform, so use a proper hash function here. This is Robert Jenkins' public domain hash function (with some minor cleanup): http://burtleburtle.net/bob/hash/evahash.html This is a protocol revision. Signed-off-by: Sage Weil <sage@newdream.net>
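For reference, a runnable user-space rendition of Jenkins' 32-bit mixing step; the exact per-input seeding used in the kernel's hash may differ, so treat this as an illustration of the mix rather than the wire-exact function.

#include <stdint.h>
#include <stdio.h>

#define mix(a, b, c) do {                               \
        a -= b; a -= c; a ^= (c >> 13);                 \
        b -= c; b -= a; b ^= (a << 8);                  \
        c -= a; c -= b; c ^= (b >> 13);                 \
        a -= b; a -= c; a ^= (c >> 12);                 \
        b -= c; b -= a; b ^= (a << 16);                 \
        c -= a; c -= b; c ^= (b >> 5);                  \
        a -= b; a -= c; a ^= (c >> 3);                  \
        b -= c; b -= a; b ^= (a << 10);                 \
        c -= a; c -= b; c ^= (b >> 15);                 \
} while (0)

static uint32_t hash32_2(uint32_t x, uint32_t y)
{
        uint32_t a = x, b = y, c = 0x9e3779b9;  /* golden-ratio seed */

        mix(a, b, c);
        return c;
}

int main(void)
{
        /* consecutive inputs come out well scattered, including the low bits */
        for (uint32_t ino = 0; ino < 4; ino++)
                printf("%u -> 0x%08x\n", (unsigned)ino,
                       (unsigned)hash32_2(ino, 0));
        return 0;
}
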
-
Sage Weil authored
These are way too big to be inline. I missed crush/* when doing the inline audit for akpm's review. Signed-off-by: Sage Weil <sage@newdream.net>
-
- 06 Nov, 2009 1 commit
-
-
Sage Weil authored
No ceph prefix. Signed-off-by: Sage Weil <sage@newdream.net>
-
- 05 Nov, 2009 3 commits
-
-
Sage Weil authored
The port is informational only, but we should make it correct. Signed-off-by: Sage Weil <sage@newdream.net>
-
Sage Weil authored
Use the __le macro, even though for -1 it doesn't matter. Signed-off-by: Sage Weil <sage@newdream.net>
-
Sage Weil authored
The endian conversions don't quite work with the old union ceph_pg. Just make it a regular struct, and make each field __le. This is simpler and it has the added bonus of actually working. Signed-off-by: Sage Weil <sage@newdream.net>
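A sketch of the resulting shape (field names and widths are illustrative, not the exact on-wire layout): a plain packed struct whose members are each explicitly little-endian, converted with the usual le*_to_cpu/cpu_to_le* helpers on access.

#include <linux/types.h>
#include <asm/byteorder.h>

struct pg_sketch {
        __le64 ps;              /* placement seed */
        __le32 pool;            /* pool id */
        __le32 preferred;       /* preferred osd, or cpu_to_le32(-1) */
} __attribute__ ((packed));

static inline int pg_sketch_preferred(const struct pg_sketch *pg)
{
        return (int)le32_to_cpu(pg->preferred);
}
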
-
- 03 Nov, 2009 1 commit
-
-
Sage Weil authored
We exchange struct ceph_entity_addr over the wire and store it on disk. The sockaddr_storage.ss_family field, however, is host endianness. So, fix ss_family endianness to big endian when sending/receiving over the wire. Signed-off-by: Sage Weil <sage@newdream.net>
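A user-space illustration of the fix (helper names are made up): ss_family lives in host byte order in memory, so it is converted to and from big endian only at the wire boundary.

#include <arpa/inet.h>
#include <sys/socket.h>

static void addr_to_wire(struct sockaddr_storage *ss)
{
        ss->ss_family = htons(ss->ss_family);   /* host -> network (big endian) */
}

static void addr_from_wire(struct sockaddr_storage *ss)
{
        ss->ss_family = ntohs(ss->ss_family);   /* network -> host */
}
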
-
- 02 Nov, 2009 3 commits
-
-
Sage Weil authored
This keeps bdi setup/teardown in line with client life cycle. Signed-off-by: Sage Weil <sage@newdream.net>
-
Sage Weil authored
Even when we encounter a corrupt bucket, we still BUG(). This fixes the warning fs/ceph/crush/mapper.c: In function 'crush_choose': fs/ceph/crush/mapper.c:352: warning: control may reach end of non-void function 'crush_bucket_choose' being inlined Signed-off-by: Sage Weil <sage@newdream.net>
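A sketch of the shape of the fix (not the actual mapper.c code): the corrupt-bucket branch still calls BUG(), but every path now has an explicit return, which is what silences the warning.

#include <linux/bug.h>

static int bucket_choose_sketch(int alg, int x)
{
        switch (alg) {
        case 1:                 /* real bucket algorithms elided */
                return x;
        default:
                BUG();          /* corrupt bucket type */
                return -1;      /* unreachable, but keeps gcc happy */
        }
}
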
-
Sage Weil authored
Fixes warning fs/ceph/xattr.c: In function '__build_xattrs': fs/ceph/xattr.c:353: warning: 'err' may be used uninitialized in this function Signed-off-by: Sage Weil <sage@newdream.net>
-
- 30 Oct, 2009 1 commit
-
-
Noah Watkins authored
Commit 645a1025 fixes calculation of object offset for layouts with multiple stripes per object. This updates the calculation of the length written to take into account multiple stripes per object. Signed-off-by: Noah Watkins <noah@noahdesu.com> Signed-off-by: Sage Weil <sage@newdream.net>
-
- 29 Oct, 2009 4 commits
-
-
Sage Weil authored
We were incorrectly calculating the object offset. If we have multiple stripe units per object, we need to shift to the start of the current su in addition to the offset within the su. Also rename bno to ono (object number) to avoid some variable naming confusion. Signed-off-by: Sage Weil <sage@newdream.net>
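A runnable user-space version of the corrected arithmetic (it mirrors the intent of the mapping code rather than copying it verbatim): with more than one stripe unit per object, the in-object offset is the start of the current su plus the offset within that su.

#include <stdint.h>
#include <stdio.h>

static void file_to_object(uint64_t off, uint32_t su, uint32_t sc,
                           uint32_t object_size,
                           uint64_t *ono, uint64_t *oxoff)
{
        uint32_t su_per_object = object_size / su;
        uint64_t bl = off / su;                 /* stripe unit number */
        uint64_t stripeno = bl / sc;            /* stripe number */
        uint32_t stripepos = bl % sc;           /* which object in the stripe */
        uint64_t objsetno = stripeno / su_per_object;

        *ono = objsetno * sc + stripepos;       /* object number */
        /* start of the current su in this object, plus offset within the su */
        *oxoff = (stripeno % su_per_object) * su + off % su;
}

int main(void)
{
        uint64_t ono, oxoff;

        /* 4 MB objects, 1 MB stripe unit, 2 objects per stripe */
        file_to_object((3u << 20) + (256u << 10), 1u << 20, 2, 4u << 20,
                       &ono, &oxoff);
        printf("ono=%llu oxoff=%llu\n",
               (unsigned long long)ono, (unsigned long long)oxoff);
        return 0;
}
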
-
Sage Weil authored
The object extent offset is the file offset _modulo_ the stripe unit. The code was correct, the comment was wrong. Reported-by: Noah Watkins <jayhawk@soe.ucsc.edu> Signed-off-by: Sage Weil <sage@newdream.net>
-
Noah Watkins authored
Use the stripe unit size already calculated and saved on the stack to avoid a redundant call to le32_to_cpu. Signed-off-by: Noah Watkins <noah@noahdesu.com> Signed-off-by: Sage Weil <sage@newdream.net>
-
Noah Watkins authored
Replace uses of list_entry() outside of list.h contexts, where it was only standing in for container_of(), with direct use of container_of(). Signed-off-by: Noah Watkins <noah@noahdesu.com> Signed-off-by: Sage Weil <sage@newdream.net>
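A user-space illustration of the substitution (types and names are made up): container_of() maps a pointer to an embedded member back to its enclosing structure, with no implication that the member is a list head, which is all list_entry() adds.

#include <stddef.h>
#include <stdio.h>

#define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

struct dentry_info {
        long lease;
        int flags;
};

int main(void)
{
        struct dentry_info di = { .lease = 42, .flags = 7 };
        int *fp = &di.flags;

        /* recover the enclosing struct from the pointer to its member */
        struct dentry_info *back = container_of(fp, struct dentry_info, flags);

        printf("lease=%ld\n", back->lease);
        return 0;
}
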
-
- 27 Oct, 2009 4 commits
-
-
Sage Weil authored
This simplifies much of the error handling during mount. It also means that we have the mount args before client creation, and we can initialize based on those options. Signed-off-by: Sage Weil <sage@newdream.net>
-
Sage Weil authored
Clearly demarcate int and string argument options, and do not try to convert string arguments to ints. Signed-off-by: Sage Weil <sage@newdream.net>
-
Sage Weil authored
Signed-off-by: Sage Weil <sage@newdream.net>
-
Sage Weil authored
Since we've increased the max mon count, we shouldn't put the addr array on the parse_mount_args stack. Put it on the heap instead. Signed-off-by: Sage Weil <sage@newdream.net>
-
- 22 Oct, 2009 1 commit
-
-
Sage Weil authored
Get rid of separate max mon limit; use the system limit instead. This allows mounts when there are lots of mon addrs provided by mount.ceph (as with a host with lots of A/AAAA records). Signed-off-by: Sage Weil <sage@newdream.net>
-
- 21 Oct, 2009 1 commit
-
-
Sage Weil authored
We can't fill i_size with rbytes at the fill_file_size stage without adding additional checks for directories. Notably, we want st_blocks to remain 0 on directories so that 'du' still works. Fill in i_blocks, i_size specially in ceph_getattr instead. Signed-off-by: Sage Weil <sage@newdream.net>
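A kernel-style sketch of the idea (the helper name and rbytes parameter are illustrative): directories report the recursive byte count as their size in getattr, while blocks stay at zero so 'du' output is unchanged.

#include <linux/fs.h>
#include <linux/stat.h>

static void fill_dir_stat_sketch(struct inode *inode, struct kstat *stat,
                                 u64 rbytes)
{
        generic_fillattr(inode, stat);
        if (S_ISDIR(inode->i_mode)) {
                stat->size = rbytes;    /* recursive bytes from the MDS */
                stat->blocks = 0;       /* keep 'du' reporting 0 for dirs */
        }
}
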
-
- 19 Oct, 2009 2 commits
-
-
Sage Weil authored
Signed-off-by: Sage Weil <sage@newdream.net>
-
Sage Weil authored
Mix the preferred osd (if any) into the placement seed that is fed into the CRUSH object placement calculation. This prevents all the placement pgs from peering with the same osds. Rev the osd client protocol with this change. Signed-off-by: Sage Weil <sage@newdream.net>
-
- 16 Oct, 2009 5 commits
-
-
Sage Weil authored
Initialize bdi->ra_pages to enable readahead, using a 512 KB default. Signed-off-by: Sage Weil <sage@newdream.net>
-
Sage Weil authored
Cleanup only. Signed-off-by: Sage Weil <sage@newdream.net>
-
Sage Weil authored
Pass the front_len we need when pulling a message off a msgpool, and WARN if it is greater than the pool's size. Then try to allocate a new message (to continue without failing). Signed-off-by: Sage Weil <sage@newdream.net>
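A sketch of the described behavior with stand-in names (msgpool_take(), msg_alloc(), and the struct fields are hypothetical, not the real msgpool API): an oversized request warns and falls back to a fresh allocation instead of failing.

#include <linux/kernel.h>

struct msg;                                             /* stand-in message type */

struct msgpool_sketch {
        int front_len;                                  /* size the pool was built for */
        int type;
};

struct msg *msg_alloc(int type, int front_len);         /* hypothetical allocator */
struct msg *msgpool_take(struct msgpool_sketch *pool);  /* hypothetical pool op */

static struct msg *msgpool_get_sketch(struct msgpool_sketch *pool, int front_len)
{
        if (front_len > pool->front_len) {
                /* pool was sized too small for this caller */
                WARN_ON(1);
                return msg_alloc(pool->type, front_len);        /* continue without failing */
        }
        return msgpool_take(pool);
}
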
-
Sage Weil authored
Define a struct for the SUBSCRIBE_ACK message, and use that to size the msgpool. Signed-off-by: Sage Weil <sage@newdream.net>
-
Sage Weil authored
Previously we were flushing dirty caps by passing an extra flag when traversing the delayed caps list. Besides being a bit ugly, that can also miss caps that are dirty but didn't result in a cap requeue: notably, mark_caps_dirty(). Separate the flushing into a separate helper, and traverse the cap_dirty list. This also brings i_dirty_item in line with i_dirty_caps: we are on the list IFF caps != 0. We carry an inode ref IFF dirty_caps|flushing_caps != 0. Lose the unused return value from __ceph_mark_caps_dirty(). Signed-off-by: Sage Weil <sage@newdream.net>
-
- 14 Oct, 2009 2 commits
-
-
Sage Weil authored
Both callers of __mark_caps_flushing() do the same work; move it into the helper. Signed-off-by: Sage Weil <sage@newdream.net>
-
Sage Weil authored
Writeback doesn't work without the bdi set, and writeback on umount doesn't work if we unregister the bdi too early. Signed-off-by: Sage Weil <sage@newdream.net>
-