- 05 Jul, 2002 1 commit
-
-
James Simmons authored
into heisenberg.transvirtual.com:/tmp/linux-input
-
- 04 Jul, 2002 39 commits
-
-
Linus Torvalds authored
into home.transmeta.com:/home/torvalds/v2.5/linux
-
Alexander Viro authored
* ->i_dev followed the example of ->s_dev - it's dev_t now. All remaining uses of ->i_dev either outright want dev_t (stat()) or couldn't care less (printing major:minor in /proc/<pid>/maps, etc.)
-
Alexander Viro authored
* JFS uses its ->logdev only twice - one of the places assigns it to_kdev_t(le32_to_cpu(...)), another uses kdev_t_to_nr() of it. Switched to u32 - it's just a place where we store the device number we'd got from the superblock.
* several reiserfs_fs.h function prototypes removed - the functions in question don't exist anymore.
* smbfs doesn't support device nodes; ->f_rdev removed.
-
Alexander Viro authored
* svc_export ->ex_dev turned into dev_t. It's a pure search key and all places that set it actually do to_kdev_t(some_dev_t_expression).
-
Alexander Viro authored
* ->dev killed for md/linear.c (same as previous parts)
-
Alexander Viro authored
* md_import_device() returns the resulting rdev or ERR_PTR(error) instead of returning 0 or an error and letting the caller find the rdev.
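For readers unfamiliar with the idiom: ERR_PTR() lets a single pointer return value carry either the object or a negative errno. A minimal sketch of the resulting calling pattern (the argument list of md_import_device() is abridged here):

	mdk_rdev_t *rdev = md_import_device(dev);	/* args abridged */
	if (IS_ERR(rdev)) {
		err = PTR_ERR(rdev);	/* recover the negative errno */
		goto abort;
	}
	/* on success, rdev is usable directly - no separate lookup */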
-
Alexander Viro authored
* a bunch of callers of partition_name() switched to calling bdev_partition_name()
* the last users of raid1 and multipath ->dev are gone; so are the fields in question.
-
Alexander Viro authored
* ->diskop() split into individual methods; prototypes cleaned up. In particular, handling of hot_add_disk() gets mdk_rdev_t * of the component we are adding as an argument instead of playing the games with major/minor. Code cleaned up.
-
Alexander Viro authored
* ->error_handler() switched to struct block_device *.
* md_sync_acct() switched to struct block_device *.
* raid5 struct disk_info ->dev is gone - we use ->bdev everywhere.
* a bunch of kdev_same() calls in drivers/md/*.c are removed where we have the corresponding struct block_device * and can simply compare those.
-
Alexander Viro authored
* since the last caller of is_read_only() is gone, the function itself is removed.
* destroy_buffers() is not used anymore; gone.
* fsync_dev() is gone; the only user is (broken) lvm.c, and the first step in fixing lvm.c will consist of propagating struct block_device * anyway; at that point we'll just use fsync_bdev() in there.
* prototype of bio_ioctl() removed - the function doesn't exist anymore.
-
Alexander Viro authored
* Bunch of functions in cdrom.c used to get kdev_t and use it only to do cdrom_find_device(dev), even though their callers already had the struct cdrom_device_info * in question. Switched to passing said pointer directly.
* useless exports removed; stuff not used outside of cdrom.c made static.
-
Alexander Viro authored
* calc_dev_sboffset() and calc_dev_size() in md.c now get mdk_rdev_t instead of kdev_t. Callers updated.
* calls of blkdev_size_in_bytes() in md.c replaced with use of rdev->bdev->bd_inode->i_size.
-
Alexander Viro authored
* devpts "upcalls" eliminated. * instead of playing games with revalidation we simply use ramfs-style tree and kill dentries upon devpts_pty_kill(). That allows to get rid of a lot of code in fs/devpts/*.c. * devpts_fs.h cleaned up. * devpts/root.c and devpts/devpts_i.h removed. * array of pointers to devpts inodes killed; with ramfs-style tree it's not needed anymore. * devpts/inode.c cleaned up. * devpts_pty_new() used to get mk_kdev() only to convert it to dev_t (hardly a surprise, since it's mknod() in disguise). Now it gets dev_t as an argument.
-
Linus Torvalds authored
into home.transmeta.com:/home/torvalds/v2.5/linux
-
Andrew Morton authored
This is Bill Irwin's cleanup patch which gives symbolic names to the fault types:

	#define VM_FAULT_OOM    (-1)
	#define VM_FAULT_SIGBUS 0
	#define VM_FAULT_MINOR  1
	#define VM_FAULT_MAJOR  2

Only arch/i386 has been updated - other architectures can do this too.
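A sketch of how an arch fault handler can dispatch on the symbolic values (illustrative; the accounting fields follow the usual task_struct counters):

	switch (handle_mm_fault(mm, vma, address, write)) {
	case VM_FAULT_MINOR:
		tsk->min_flt++;		/* satisfied without IO */
		break;
	case VM_FAULT_MAJOR:
		tsk->maj_flt++;		/* required IO */
		break;
	case VM_FAULT_SIGBUS:
		goto do_sigbus;
	default:			/* VM_FAULT_OOM */
		goto out_of_memory;
	}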
-
Andrew Morton authored
The blockdev mapping's private_lock is fairly contended. The buffer LRU cache fixed a lot of that, but under page replacement load, try_to_free_buffers is still showing up. Moving the freeing of buffer_heads outside the lock reduces contention in there by 30%.
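The usual shape of that change, sketched (the drop_buffers() helper detaching the ring under the lock is an assumption about the surrounding code, not a quote of it):

	spin_lock(&mapping->private_lock);
	ret = drop_buffers(page, &buffers_to_free);	/* detach under the lock */
	spin_unlock(&mapping->private_lock);

	/* the actual freeing no longer holds private_lock */
	if (buffers_to_free) {
		struct buffer_head *bh = buffers_to_free;
		do {
			struct buffer_head *next = bh->b_this_page;
			free_buffer_head(bh);
			bh = next;
		} while (bh != buffers_to_free);
	}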
-
Andrew Morton authored
Add a BUG() check to __free_pages_ok() - to catch someone freeing a page which has a non-zero refcount. Actually, this check is mainly to catch someone (ie: shrink_cache()) incrementing a page's refcount shortly after it has been freed. Also clean up __free_pages_ok() a bit and convert lots of BUGs to BUG_ON.
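The BUG() to BUG_ON() conversion is mechanical; the new refcount check would look roughly like this (illustrative):

	/* before */
	if (page_count(page) != 0)
		BUG();

	/* after */
	BUG_ON(page_count(page) != 0);	/* freeing a still-referenced page */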
-
Andrew Morton authored
Fix a buglet in invalidate_list_pages2(): there is a small window in which writeback could start against the page before this function locks it. The patch closes the race by performing the PageWriteback test inside PageLocked. Testing PageWriteback inside PageLocked is "definitive" - when a page is locked, writeback cannot start against it.
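In code, the definitive test looks like this (sketch; the bail-out path is illustrative):

	lock_page(page);
	if (PageWriteback(page)) {
		/* writeback snuck in before we took the lock; it cannot
		 * start while we hold the lock, so this test is final */
		unlock_page(page);
		goto unable_to_invalidate;
	}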
-
Andrew Morton authored
This is a patch which Stephen has applied to ext3's 2.4 repository. Originally written by Andreas, generalised somewhat by Stephen. Add jbd callback mechanism, requested for InterMezzo. We allow the jbd's client to request notification when a given handle's IO finally commits to disk, so that clients can manage their own writeback state asynchronously.
-
Andrew Morton authored
Forward-port of a fix which Stephen has applied to ext3's 2.4 CVS tree. Fix for a rare problem seen under stress in data=journal mode: if we have to restart a truncate transaction while traversing the inode's direct blocks, we need to deal with bh==NULL in ext3_clear_blocks.
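A sketch of the guarded loop (names from ext3; the exact body is abridged):

	for (p = first; p < last; p++) {
		u32 nr = le32_to_cpu(*p);
		if (nr) {
			*p = 0;
			/* bh is NULL while traversing the inode's direct
			 * blocks - there is no indirect buffer to journal */
			if (bh)
				ext3_journal_dirty_metadata(handle, bh);
			ext3_free_blocks(handle, inode, nr, 1);
		}
	}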
-
Andrew Morton authored
generic_writepages and mpage_writepages are basically identical, except one calls ->writepage() and the other calls mpage_writepage(). This duplication is irritating. The patch folds generic_writepages() into mpage_writepages(). It does this rather kludgily: if the get_block argument to mpage_writepages() is NULL then use ->writepage(). Can't think of a better way, really - we could go for a fully-blown write_actor_t thing, but that would be overly elaborate and would not allow mpage_writepage() to be inlined inside mpage_writepages(), which is rather desirable.
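The kludge in outline (sketch; in this era ->writepage() takes just the page):

	if (get_block == NULL)
		ret = mapping->a_ops->writepage(page);
	else
		ret = mpage_writepage(page, get_block);	/* args abridged */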
-
Andrew Morton authored
Fixes a bug in generic_writepages() and its cut-n-paste-cousin, mpage_writepages(). The code was clearing PageDirty and then bailing out if it discovered the page was under writeback. Which would cause the dirty bit to be lost. It's a very small window, but reversing the order so PageDirty is only cleared when we know for-sure that IO will be started fixes it up.
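The reordering, schematically (illustrative control flow):

	/* old order - the dirty bit is lost if we then bail: */
	ClearPageDirty(page);
	if (PageWriteback(page))
		goto skip;

	/* new order - clear only once IO is certain to start: */
	if (PageWriteback(page))
		goto skip;
	ClearPageDirty(page);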
-
Andrew Morton authored
The `page allocation failure' warning in __alloc_pages() is being a pain. But I'm persisting with it... The patch renames PF_RADIX_TREE to PF_NOWARN, and uses it in a few places where allocation failures are known to happen. These code paths are well-tested now and suppressing the warning is OK.
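Typical use of the flag around an allocation whose failure is handled (sketch):

	unsigned long old_flags = current->flags;

	current->flags |= PF_NOWARN;	/* failure here is expected and handled */
	page = alloc_page(gfp_mask);
	current->flags = old_flags;
	if (!page)
		return -ENOMEM;		/* no console warning on this path */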
-
Andrew Morton authored
move_from_swap_cache() and move_to_swap_cache() are playing with page->flags nonatomically. The page is on the LRU at the time and another CPU could be altering page->flags concurrently.

The patch converts those functions to use atomic operations. It also rationalises the number of bits which are cleared. It's not really clear to me what page flags we really want to set to a known state in there. It had no right to go clearing PG_arch_1. I'm now clearing PG_arch_1 inside rmqueue(), which is still a bit presumptuous.

btw: shmem uses PAGE_CACHE_SIZE and swapper_space uses PAGE_SIZE. I've been carefully maintaining the distinction, but it looks like shmem will break if we ever do make these values different.

Also, __add_to_page_cache() was performing a non-atomic RMW against page->flags, under the assumption that it was a newly allocated page which no other CPU would look at. Not true - this function is used for moving anon pages into swapcache. Those anon pages are on the LRU - other CPUs can be performing operations against page->flags while __add_to_swap_cache is stomping on them. This had me running around in circles for two days.

So let's move the initialisation of the page state into rmqueue(), where the page really is new (could do it in page_cache_alloc, perhaps). The SetPageLocked() in __add_to_page_cache() is also rather curious. Seems OK for both pagecache and swapcache so I covered that with a comment.

2.4 has the same problem. Basically, add_to_swap_cache() can stomp on another CPU's manipulation of page->flags. After a quick review of the code there, it is barely conceivable that a concurrent refill_inactive() could get its PG_referenced and PG_active bits scribbled on. Rather unlikely, because swap_out() will probably see PageActive() and bail out. Also, mark_dirty_kiobuf() could have its PG_dirty bit accidentally cleared (but try_to_swap_out() sets it again later). But there may be other code paths. Really, I think this needs fixing in 2.4 - it's horrid.
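The difference, reduced to one flag (sketch):

	/* non-atomic RMW - unsafe while other CPUs can reach page->flags: */
	page->flags &= ~(1 << PG_referenced);

	/* atomic equivalent - safe for a page visible on the LRU: */
	ClearPageReferenced(page);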
-
Andrew Morton authored
In mpage_writepage(), use __GFP_HIGH when allocating the BIO: writeback is a memory reclaim function and is entitled to dip into the page reserves to get its IO underway.
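Concretely, that is one flag in the allocation (bio_alloc() takes a gfp mask and a vector count; GFP_NOFS as the base mask is an assumption here):

	bio = bio_alloc(GFP_NOFS | __GFP_HIGH, nr_vecs);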
-
Andrew Morton authored
This patch reinstates __GFP_HIGH functionality. __GFP_HIGH means "able to dip into the emergency pools". However, somewhere along the line this got broken. __GFP_HIGH ceased to do anything. Instead, !__GFP_WAIT is used to tell the page allocator to try harder.

__GFP_HIGH makes sense. The concepts of "unable to sleep" and "should try harder" are quite separate, and overloading !__GFP_WAIT to mean "should access emergency pools" seems wrong.

This patch fixes a problem in mempool_alloc(). mempool_alloc() tries the first allocation with __GFP_WAIT cleared. If that fails, it tries again with __GFP_WAIT enabled (if the caller can support __GFP_WAIT). So it is currently performing an atomic allocation first, even though the caller said that they're prepared to go in and call the page stealer.

I thought this was a mempool bug, but Ingo said:

> no, it's not GFP_ATOMIC. The important difference is __GFP_HIGH, which
> triggers the intrusive highprio allocation mode. Otherwise gfp_nowait is
> just a nonblocking allocation of the same type as the original gfp_mask.
> ...
> what i've added is a bit more subtle allocation method, with both
> performance and balancing-correctness in mind:
>
> 1. allocate via gfp_mask, but nonblocking
> 2. if failure => try to get from the pool if the pool is 'full enough'.
> 3. if failure => allocate with gfp_mask [which might block]
>
> there is performance data that this method improves bounce-IO performance
> significantly, because even under VM pressure (when gfp_mask would block)
> we can still use up to 50% of the memory pool without blocking (and
> without endangering deadlock-free allocation). Ie. the memory pool is also
> a fast 'frontside cache' of memory elements.

Ingo was assuming that __GFP_HIGH was still functional. It isn't, and the mempool design wants it.
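Ingo's three-step order, in outline (remove_element() is a hypothetical helper for taking an object off the pool's free list; the 'full enough' test is illustrative):

	void *element;

	/* 1: nonblocking attempt with the caller's mask */
	element = pool->alloc(gfp_mask & ~__GFP_WAIT, pool->pool_data);
	if (element)
		return element;

	/* 2: fall back to the pool while it is 'full enough' */
	if (pool->curr_nr > pool->min_nr / 2)
		return remove_element(pool);	/* hypothetical helper */

	/* 3: last resort - the caller's mask, which might block */
	return pool->alloc(gfp_mask, pool->pool_data);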
-
Andrew Morton authored
Yet another SetPageDirty/set_page_dirty bugfix: mark_dirty_kiobuf needs to run set_page_dirty() so the page goes onto its mapping's dirty_pages list.
-
Andrew Morton authored
For O_DIRECT opens we're currently checking that the fs supports O_DIRECT at write(2)-time. This is a forward-port of Andrea's patch which moves the check to open() time. Seems more sensible.
-
Andrew Morton authored
It seems that the yield() macro requires state TASK_RUNNING, but practically none of the callers remember to do that. The patch turns yield() into a real function which sets state TASK_RUNNING before scheduling.
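The resulting function, in essence (a sketch matching the shape described above, assuming sys_sched_yield() is the underlying primitive):

	void yield(void)
	{
		set_current_state(TASK_RUNNING);
		sys_sched_yield();
	}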
-
Andrew Morton authored
do_select() does set_current_state(TASK_INTERRUPTIBLE), then calls __pollwait(), which calls __get_free_page() - and the cond_resched() which I added to the pagecache reclaim code never returns, because schedule() is entered with the task no longer in state TASK_RUNNING. The patch makes cond_resched() more useful by setting current->state to TASK_RUNNING before scheduling.
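So cond_resched() becomes, roughly:

	static inline void cond_resched(void)
	{
		if (need_resched()) {
			set_current_state(TASK_RUNNING);
			schedule();
		}
	}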
-
Andrew Morton authored
A little cleanup: Most callers of list_splice() immediately reinitialise the source list_head after calling list_splice(). So create a new list_splice_init() which does all that.
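The helper, in essence:

	static inline void list_splice_init(struct list_head *list,
					    struct list_head *head)
	{
		if (!list_empty(list)) {
			__list_splice(list, head);
			INIT_LIST_HEAD(list);	/* source is empty again */
		}
	}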
-
Andrew Morton authored
A shmem cleanup/bugfix patch from Hugh Dickins.

- Minor: in try_to_unuse(), only wait on writeout if we actually started new writeout. Otherwise there is no need, because a wait_on_page_writeback() has already been executed against this page. And it's locked, so no new writeback can start.

- Minor: in shmem_unuse_inode(): remove all the wait_on_page_writeback() logic. We already did that in try_to_unuse(), and the page is locked so no new writeback can start.

- Less minor: add a missing page_cache_release() to shmem_get_page_locked() in the uncommon case where the page was found to be under writeout.
-
Andrew Morton authored
Patch from Christoph Hellwig removes swap_get_block(). I was sort-of hanging onto this function because it is a standard get_block function, and maybe perhaps it could be used to make swap use the regular filesystem I/O functions. We don't want to do that, so kill it.
-
Andrew Morton authored
Writeback/pdflush cleanup patch from Steven Augart:

* Exposes nr_pdflush_threads as /proc/sys/vm/nr_pdflush_threads, read-only. (I like this - I expect that management of the pdflush thread pool will be important for many-spindle machines, and this is a neat way of getting at the info.) A sketch of the sysctl entry appears below.

* Adds minimum and maximum checking to the five writable pdflush and fs-writeback parameters.

* Minor indentation fix in sysctl.c.

* mm/pdflush.c now includes linux/writeback.h, which prototypes pdflush_operation. This is so that the compiler can automatically check that the prototype matches the definition.

* Adds a few comments to existing code.
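The read-only entry amounts to a ctl_table slot with mode 0444. A sketch, assuming a VM_NR_PDFLUSH_THREADS enum value and designated initializers for readability (the real table may be laid out positionally):

	{
		.ctl_name	= VM_NR_PDFLUSH_THREADS,
		.procname	= "nr_pdflush_threads",
		.data		= &nr_pdflush_threads,
		.maxlen		= sizeof nr_pdflush_threads,
		.mode		= 0444,		/* world-readable, not writable */
		.proc_handler	= &proc_dointvec,
	},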
-
Andrew Morton authored
- Comment and documentation fixlets.

- Remove some unneeded fields from swapper_inode (these are a leftover from when I had swap using the filesystem IO functions).

- Fix a printk bug in pci/pool.c: when dma_addr_t is 64 bit it generates a compile warning, and will print out garbage. Cast it to unsigned long long.

- Convert some writeback #defines into enums (Steven Augart).
-
Andrew Morton authored
Having just fiddled with the refcounts of blockdev buffers, I want some way of assuring that the code is correct and is not leaking buffer_heads. There's no easy way to do this: if a blockdev page has pinned buffers then truncate_complete_page just cuts it loose and we leak memory. The patch adds a bit of debug code to catch these leaks. This code, PF_RADIX_TREE and buffer_error() need to be removed later on.
-
Andrew Morton authored
Removes ext3's open-coded inode and allocation bitmap LRUs. This patch includes a cleanup to ext3_new_block(). The local variables `bh', `bh2', `i', `j', `k' and `tmp' have been renamed to something more palatable.
-
Andrew Morton authored
Remove ext2's open-coded bitmap LRUs. Core kernel does this for it now.
-
Andrew Morton authored
ext2 and ext3 implement a custom LRU cache of buffer_heads - the eight most-recently-used inode bitmap buffers and the eight MRU block bitmap buffers.

I don't like them, for a number of reasons:

- The code is duplicated between filesystems.

- The functionality is unavailable to other filesystems.

- The LRU only applies to bitmap buffers. And not, say, indirects.

- The LRUs are subtly dependent upon lock_super() for protection: without lock_super protection a bitmap could be evicted and freed while in use. And removing this dependence on lock_super() gets us one step on the way toward getting that semaphore out of the ext2 block allocator - it causes significant contention under some loads and should be a spinlock.

- The LRUs pin 64 kbytes per mounted filesystem.

Now, we could just delete those LRUs and rely on the VM to manage the memory. But that would introduce significant lock contention in __find_get_block - the blockdev mapping's private_lock and page_lock are heavily used.

So this patch introduces a transparent per-CPU bh lru which is hidden inside __find_get_block(), __getblk() and __bread(). It is designed to shorten code paths and to reduce lock contention. It uses a seven-slot LRU. It achieves a 99% hit rate in `dbench 64'. It provides benefit to all filesystems. The next patches remove the open-coded LRUs from ext2 and ext3.

Taken together, these patches are a code cleanup (300-400 lines gone), and they reduce lock contention. Anton tested these patches on the 32-way and demonstrated a throughput improvement of up to 15% on RAM-only dbench runs. See http://samba.org/~anton/linux/2.5.24/dbench/

Most of this benefit is from avoiding find_get_page() on the blockdev mapping, because the generic LRU copes with indirect blocks as well as bitmaps.
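Shape of the hidden per-CPU cache, sketched (bh_lru_find() is a hypothetical name for the lookup step; the buffer_head fields are the ones used elsewhere in this series):

	#define BH_LRU_SIZE 7	/* the seven-slot LRU mentioned above */

	static struct bh_lru {
		struct buffer_head *bhs[BH_LRU_SIZE];
	} bh_lrus[NR_CPUS];

	static struct buffer_head *bh_lru_find(struct block_device *bdev,
					       sector_t block)
	{
		struct bh_lru *lru = &bh_lrus[smp_processor_id()];
		int i;

		for (i = 0; i < BH_LRU_SIZE; i++) {
			struct buffer_head *bh = lru->bhs[i];
			if (bh && bh->b_bdev == bdev &&
			    bh->b_blocknr == block) {
				get_bh(bh);	/* pin before returning */
				return bh;	/* hit: no page-cache lookup */
			}
		}
		return NULL;	/* miss: fall back to find_get_page() */
	}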
-