- 12 Aug, 2002 1 commit
-
-
Linus Torvalds authored
Merge http://linux-ntfs.bkbits.net/ntfs-tng-2.5
into home.transmeta.com:/home/torvalds/v2.5/linux
-
- 13 Aug, 2002 1 commit
-
-
Anton Altaparmakov authored
- Unlock the page in an out of memory error code path in fs/ntfs/aops.c::ntfs_read_block() (the unlock-on-every-exit rule is sketched below).
- If fs/ntfs/aops.c::ntfs_readpage() is called on an uptodate page, just unlock the page and return. (This can happen due to ->writepage clearing PageUptodate() during write out of MstProtected() attributes.)
- Remove leaked write code again.
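Both of the first two items are instances of the readpage locking contract: a ->readpage implementation receives the page locked and must unlock it on every exit path, error paths and already-uptodate pages included. A minimal stand-alone sketch of that rule, using stub types rather than the real fs/ntfs/aops.c code:

    #include <stdlib.h>

    struct page_stub {
        int locked;
        int uptodate;
    };

    static void unlock_page_stub(struct page_stub *p) { p->locked = 0; }

    /* Sketch of a readpage-style helper; returns 0 or an error. */
    static int readpage_sketch(struct page_stub *page)
    {
        void *ctx;

        /* ->writepage on MST-protected data may have left the page
         * uptodate; nothing to read, but it must still be unlocked. */
        if (page->uptodate) {
            unlock_page_stub(page);
            return 0;
        }

        ctx = malloc(64);
        if (!ctx) {
            /* The bug class: returning here while still locked. */
            unlock_page_stub(page);
            return -1;      /* -ENOMEM in the kernel */
        }

        /* ... read the block, then mark the page uptodate ... */
        page->uptodate = 1;
        free(ctx);
        unlock_page_stub(page);
        return 0;
    }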
-
- 12 Aug, 2002 20 commits
-
-
Rik van Riel authored
The following patch corrects a bug where rmap would continue trying to swap out a page even after it failed on one pte, which could result in leaked pte chains and a bug when exiting applications which use mlock(). The bug was tracked down by Christian Ehrhardt; the reason it wasn't found earlier was a subtlety in the code, so I've taken the liberty of changing Christian's patch into something more explicit (sketched below). We shouldn't let this one happen again ;)
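The subtlety is pure loop control: once unmapping fails for one pte (an mlock()ed mapping, say), continuing to walk the chain leaves the reverse-mapping state half torn down. A self-contained sketch of the explicit version of that control flow, with hypothetical stand-in types rather than the real rmap pte chains:

    enum unmap_status { UNMAP_OK, UNMAP_FAIL };

    struct pte_entry {
        struct pte_entry *next;
        int mlocked;            /* stand-in for an mlock()ed mapping */
    };

    static enum unmap_status unmap_one(struct pte_entry *pte)
    {
        return pte->mlocked ? UNMAP_FAIL : UNMAP_OK;
    }

    static enum unmap_status unmap_chain(struct pte_entry *chain)
    {
        enum unmap_status ret = UNMAP_OK;

        for (; chain; chain = chain->next) {
            ret = unmap_one(chain);
            if (ret != UNMAP_OK)
                break;  /* the fix: stop at the first failure */
        }
        return ret;
    }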
-
Linus Torvalds authored
into penguin.transmeta.com:/home/penguin/torvalds/repositories/kernel/linux
-
Linus Torvalds authored
into penguin.transmeta.com:/home/penguin/torvalds/repositories/kernel/linux
-
Alexander Viro authored
Misc. compile fixes, xd.c switched to per-disk gendisks, Alan's 2.4 fixes for xd.c ported.
-
Alexander Viro authored
DAC960 switched to per-disk gendisks.
-
Linus Torvalds authored
-
Linus Torvalds authored
-
David S. Miller authored
-
David S. Miller authored
-
David S. Miller authored
-
David S. Miller authored
-
David S. Miller authored
-
David S. Miller authored
-
David S. Miller authored
-
David S. Miller authored
-
David S. Miller authored
into nuts.ninka.net:/home/davem/src/BK/net-2.5
-
Arnaldo Carvalho de Melo authored
The s/at_addr/atalk_addr/ in atalk.h broke the compilation of drivers/net/appletalk/*. The patch below fixes it. From Adrian Bunk.
-
Ingo Molnar authored
3 TLS entries, 9 cycles copying and no branches in the context-switch path. The patch also adds Christoph's suggestion and renames modify_ldt_ldt_s (yuck!) to user_desc.
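For reference, user_desc is the descriptor userspace hands to the TLS system calls. Its field layout, quoted from memory of the i386 ABI as it later stabilised rather than from this exact patch, is roughly:

    /* Roughly the renamed structure (i386 TLS ABI; from memory,
     * not copied from this patch). */
    struct user_desc {
        unsigned int entry_number;
        unsigned int base_addr;
        unsigned int limit;
        unsigned int seg_32bit:1;
        unsigned int contents:2;
        unsigned int read_exec_only:1;
        unsigned int limit_in_pages:1;
        unsigned int seg_not_present:1;
        unsigned int useable:1;
    };

A fixed window of three GDT entries is what allows the context-switch path to be a straight, branch-free copy of three descriptors, which the 9-cycle figure above reflects.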
-
David S. Miller authored
-
Steven Whitehouse authored
-
- 11 Aug, 2002 8 commits
-
-
David S. Miller authored
into nuts.ninka.net:/home/davem/src/BK/net-2.5
-
David S. Miller authored
-
David S. Miller authored
-
David S. Miller authored
-
David S. Miller authored
-
Pete Zaitcev authored
-
Pete Zaitcev authored
- page-size PTE directory with 16-word pmd_t, as suggested by RMK and Riel
- support for 2.5.x softirq infrastructure
- other miscellanea
-
David S. Miller authored
into nuts.ninka.net:/home/davem/src/BK/net-2.5
-
- 10 Aug, 2002 10 commits
-
-
Arnaldo Carvalho de Melo authored
-
Linus Torvalds authored
-
Andrew Morton authored
Well, the optimum solution there would be to create and use `inc_preempt_count_non_preempt()'. I don't see any way of embedding this in kmap_atomic() or copy_to_user_atomic() without loss of flexibility or incurring a double-inc somewhere.
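The `double-inc' can be shown with a toy counter: if kmap_atomic() already bumps the preempt count, a copy helper that embeds its own bump would count the region twice. A stand-alone sketch (stub counter; inc_preempt_count_non_preempt() is only the name proposed above, not an existing API):

    /* Stub preempt counter; the real one lives in thread_info. */
    static int preempt_count_stub;

    static void kmap_atomic_stub(void)   { preempt_count_stub++; }
    static void kunmap_atomic_stub(void) { preempt_count_stub--; }

    /* Embedding the increment here too would double-count any
     * caller that is already inside a kmap_atomic() section: */
    static void copy_to_user_atomic_stub(void)
    {
        preempt_count_stub++;   /* the double-inc */
        /* ... do the copy ... */
        preempt_count_stub--;
    }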
-
Linus Torvalds authored
Merge bk://linuxusb.bkbits.net/pci_hp-2.5
into home.transmeta.com:/home/torvalds/v2.5/linux
-
Linus Torvalds authored
Merge bk://ppc.bkbits.net/for-linus-ppc
into home.transmeta.com:/home/torvalds/v2.5/linux
-
Andrew Morton authored
Fix a race between set_page_dirty() and truncate. The page could have been removed from the mapping while this CPU is spinning on the lock. __free_pages_ok() will go BUG. This has not been observed in practice - most callers of set_page_dirty() hold the page lock which gives exclusion from truncate. But zap_pte_range() does not. A fix for this has been sent to Marcelo also.
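The race has a standard shape: a pointer read before a lock is taken can be stale by the time the lock is held, so it must be re-read under the lock. A stand-alone sketch of the fixed ordering, with pthread stand-ins rather than the 2.5 pagecache locking:

    #include <pthread.h>
    #include <stddef.h>

    struct mapping_s {
        int dirty_count;            /* stands in for the dirty list */
    };

    struct page_s {
        struct mapping_s *mapping;  /* truncate sets this to NULL */
    };

    /* Returns nonzero if the page was actually marked dirty. */
    static int set_page_dirty_sketch(struct page_s *page,
                                     pthread_mutex_t *pagecache_lock)
    {
        struct mapping_s *mapping;

        pthread_mutex_lock(pagecache_lock);
        /* Re-read under the lock: truncate may have cleared it
         * while this CPU was spinning. */
        mapping = page->mapping;
        if (mapping)
            mapping->dirty_count++;
        pthread_mutex_unlock(pagecache_lock);
        return mapping != NULL;
    }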
-
Andrew Morton authored
Some direct IO fixes from Badari Pulavarty:
- off-by-one in the bounds checking in blkdev_get_blocks() (the boundary test is sketched below).
- When adding more blocks into a bio_vec, account for the current offset into that bio_vec.
- Fix a total ballsup in the code which calculates the total number of pages which are about to be put under IO.
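The off-by-one is the classic end-of-range test. With max_block counting one past the last valid block, a run of nblocks starting at block is in range iff block < max_block and block + nblocks <= max_block; getting either comparison wrong by one accepts or rejects a single boundary block. An illustrative check (made-up function, not the real blkdev_get_blocks()):

    /* Illustrative bounds check; max_block is one past the last
     * valid block, as is conventional. */
    static int blocks_in_range(unsigned long block, unsigned long nblocks,
                               unsigned long max_block)
    {
        if (block >= max_block)
            return 0;   /* starts past the end */
        if (nblocks > max_block - block)
            return 0;   /* runs past the end (subtraction avoids
                         * unsigned overflow in block + nblocks) */
        return 1;
    }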
-
Andrew Morton authored
Forward port of the get_user_pages() change from 2.4 (both new checks are sketched below).
- If the vma is marked as a VM_IO area then fail the map. This prevents kernel deadlocks which occur when applications which have frame buffers mapped try to dump core. It also prevents a kernel oops when a debugger is attached to a process which has an IO mmap.
- Check that the mapped page is inside mem_map[] (pfn_valid).
- Inline follow_page() and remove the preempt_disable()s. It has only a single callsite and is called under spinlock.
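Both checks are cheap guards at the front of the pin-down path. A stand-alone sketch with stand-in types (the real tests are vma->vm_flags & VM_IO and pfn_valid() inside get_user_pages()/follow_page()):

    #define SKETCH_VM_IO    0x00004000UL  /* stand-in for VM_IO */

    struct vma_sketch {
        unsigned long vm_flags;
    };

    /* Returns 1 if the page may be pinned, 0 to fail the map. */
    static int may_pin_page(const struct vma_sketch *vma,
                            unsigned long pfn, unsigned long max_pfn)
    {
        if (vma->vm_flags & SKETCH_VM_IO)
            return 0;   /* I/O mapping: fail rather than deadlock */
        if (pfn >= max_pfn)
            return 0;   /* outside mem_map[]: not pfn_valid() */
        return 1;
    }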
-
Andrew Morton authored
The patch from Stephen Tweedie allows users to modify the journal commit interval for the ext3 filesystem. The commit interval is normally five seconds. For portable computers with spun-down drives it is advantageous to be able to increase the commit interval; there may also be advantages in decreasing it for specialised applications such as heavily-loaded NFS servers which are using synchronous exports. Laptop users will also need to increase the pdflush periodic writeback interval (/proc/sys/vm/dirty_writeback_centisecs), because the `kupdate' activity also forces a commit.
To specify the commit interval, use
    mount -o commit=30 /dev/hda1 /mnt/whatever
or
    mount -o remount,commit=30 /dev/hda1
The commit interval is specified in units of seconds.
-
Andrew Morton authored
This is the first of three patches which reduce the amount of kmap/kunmap traffic on highmem machines.

The workload which was tested was RAM-only dbench, which is dominated by copy_*_user() costs. The three patches speed up my 4xPIII by 3%, and speed up a 16P NUMA-Q by 100 to 150%. The first two patches (copy_strings and pagecache reads) speed up an 8-way by 15%; I expect that all three patches will speed up the 8-way by 40%. Some of the benefit is from reduced pressure on kmap_lock; most of it is from reducing the number of global TLB invalidations.

This patch fixes up copy_strings(), which does a huge amount of kmapping. Martin Bligh has noted that across a kernel compile this function is the second or third largest user of kmaps in the kernel. The fix is pretty simple: just hang onto the previous kmap as we go around the loop. It reduces the number of kmappings from copy_strings by a factor of 30.
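The `hang onto the previous kmap' fix is a one-variable change in spirit: remember which page is currently mapped and only remap when it changes. A stand-alone sketch with stub kmap/kunmap (the real change is in fs/exec.c::copy_strings()):

    #include <stddef.h>

    /* Identity-mapping stubs standing in for kmap()/kunmap(). */
    static void *kmap_stub(void *page)   { return page; }
    static void  kunmap_stub(void *page) { (void)page; }

    static void copy_strings_sketch(void **pages, int npages)
    {
        void *kmapped_page = NULL;  /* page currently mapped, if any */
        char *kaddr = NULL;
        int i;

        for (i = 0; i < npages; i++) {
            if (pages[i] != kmapped_page) {   /* remap only on change */
                if (kmapped_page)
                    kunmap_stub(kmapped_page);
                kaddr = kmap_stub(pages[i]);
                kmapped_page = pages[i];
            }
            /* ... copy argument bytes through kaddr ... */
            (void)kaddr;
        }
        if (kmapped_page)
            kunmap_stub(kmapped_page);
    }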
-