- 04 Apr, 2002 35 commits
-
-
Dave Jones authored
-
Dave Jones authored
Brings QNX4FS back in sync with 2.4
-
Dave Jones authored
Originally from the kernel janitor folks
-
Dave Jones authored
Original from Rene Scharfe. This fixes a problem where MSDOS filesystems ignore their 'check' mount option.
-
Dave Jones authored
Original fix from Andi Kleen
-
Dave Jones authored
Originally by Anton Altaparmakov. I think Anton is going to submit his rewritten NTFS soon making this null and void, but in the interim, it fixes a known problem with NTFS and large allocations.
-
Dave Jones authored
From the kernel janitor folks
-
Dave Jones authored
From the kernel janitor folks
-
Linus Torvalds authored
-
Linus Torvalds authored
-
Linus Torvalds authored
Merge bk://bk.arm.linux.org.uk
into penguin.transmeta.com:/home/penguin/torvalds/repositories/kernel/linux
-
Russell King authored
into flint.arm.linux.org.uk:/usr/src/linux-bk-2.5/linux-2.5-rmk
-
Linus Torvalds authored
Merge http://gkernel.bkbits.net/net-drivers-2.5
into penguin.transmeta.com:/home/penguin/torvalds/repositories/kernel/linux
-
Linus Torvalds authored
Merge http://gkernel.bkbits.net/fs-2.5
into penguin.transmeta.com:/home/penguin/torvalds/repositories/kernel/linux
-
Linus Torvalds authored
Merge http://gkernel.bkbits.net/irda-2.5
into penguin.transmeta.com:/home/penguin/torvalds/repositories/kernel/linux
-
Linus Torvalds authored
-
Linus Torvalds authored
-
Robert Love authored
The preempt_count debug check that went into 2.5.8-pre1 already caught a simple case in kjournald. Specifically, kjournald does not drop the BKL when it exits as it knows schedule will do so for it. For the sake of clarity and exiting with a preempt_count of zero, the attached patch explicitly calls unlock_kernel when kjournald is exiting.
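The fix described above can be modelled in a few lines. This is a toy userspace sketch, not real kernel code: a plain counter stands in for the BKL/preempt accounting, and the explicit unlock on the exit path is what keeps the count at zero.

```c
#include <assert.h>

/* Toy model of the kjournald fix. A lock-depth counter stands in for
 * the preempt/BKL accounting; names are illustrative only. The old
 * exit path left the BKL held and relied on schedule() to drop it;
 * the fix unlocks explicitly so the thread exits with a count of zero. */
static int lock_depth;

static void lock_kernel(void)   { lock_depth++; }
static void unlock_kernel(void) { lock_depth--; }

static int kjournald(void)
{
    lock_kernel();
    /* ... journalling main loop would run here ... */
    unlock_kernel();           /* the explicit unlock the patch adds */
    return lock_depth;         /* zero, so the debug check stays quiet */
}
```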
-
Linus Torvalds authored
-
Eli Kupermann authored
Add proper log-level qualifiers to the printk calls.
-
Eli Kupermann authored
Add a missing PCI write flush to the procedure e100_exec_cmd.
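The pattern behind this fix: PCI writes are posted, so a memory-mapped write may sit in a buffer until a read of the same device forces it out. Below is a minimal userspace model of that read-back flush; the names (exec_cmd, the mock register) are hypothetical and not the actual e100 driver code.

```c
#include <assert.h>
#include <stdint.h>

/* Userspace model of a PCI posted-write flush. On real hardware a
 * writeb() may linger in a posting buffer; reading any register of
 * the same device forces the write to complete. Simulated here with
 * a 'posted' flag; not real driver code. */
static uint8_t csr;            /* stands in for a memory-mapped register */
static int     posted;         /* 1 while a write is still "in flight" */

static void    writeb_mock(uint8_t v) { csr = v; posted = 1; }
static uint8_t readb_mock(void)       { posted = 0; return csr; } /* read flushes */

/* Issue a command, then read back to flush the posted write. */
static uint8_t exec_cmd(uint8_t cmd)
{
    writeb_mock(cmd);
    return readb_mock();       /* the flush the commit message adds */
}
```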
-
Eli Kupermann authored
The patch separates the maximum busy-wait constants, giving a maximum of 100 usec for the SCB wait and a maximum of 50 usec for the CUS-idle wait. These constants were found sufficient under heavy-traffic testing.
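The shape of such a bounded busy-wait, with the two limits kept separate, can be sketched as below. This is a simulation with a loop counter standing in for microsecond delays; the constants mirror the commit, everything else is illustrative.

```c
#include <assert.h>

/* Sketch of two bounded busy-waits with separate limits, modelling
 * the 100 usec SCB wait and 50 usec CUS-idle wait described above.
 * The device is simulated and one loop iteration stands in for one
 * microsecond of delay; not real driver code. */
enum { SCB_WAIT_MAX_US = 100, CUS_IDLE_MAX_US = 50 };

/* Poll ready() once per "usec" up to max_us; return usecs waited, or -1. */
static int busy_wait(int (*ready)(void), int max_us)
{
    for (int us = 0; us < max_us; us++)
        if (ready())
            return us;
    return -1;                 /* timed out */
}

/* Mock device that becomes ready after 'countdown' polls. */
static int countdown;
static int dev_ready(void) { return --countdown <= 0; }
```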
-
Andrew Morton authored
Again, we don't need to sync indirects as we dirty them because we run a commit if IS_SYNC(inode) prior to returning to the caller of write(2). Writing a 10 meg file in 0.1 meg chunks is sped up by, err, a factor of fifty. That's a best case.
-
Jeff Garzik authored
At present, when mounted synchronously or with `chattr +S' in effect, ext2 syncs the indirect blocks for every new block when extending a file. This is not necessary, because a sync is performed on the way out of generic_file_write(). This will pick up all necessary data from inode->i_dirty_buffers and inode->i_dirty_data_buffers, and is sufficient.

The patch removes all the syncing of indirect blocks. On a non-write-caching scsi disk, an untar of the util-linux tarball runs three times faster. Writing a 100 megabyte file in one megabyte chunks speeds up ten times.

The patch also removes the intermediate indirect block syncing on the truncate() path. Instead, we sync the indirects at a single place, via inode->i_dirty_buffers. This not only means that the writes (may) cluster better. It means that we perform much, much less actual I/O during truncate, because most or all of the indirects will no longer be needed for the file, and will be invalidated. fsync() and msync() still work correctly.

One side effect of this patch is that VM-initiated writepage() against a file hole will no longer block on writeout of indirect blocks. This is good.
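The idea — queue dirty buffers and sync them once on the way out, instead of one synchronous write per indirect block — can be illustrated with a toy model. Everything here (the list, the counters) is a stand-in for inode->i_dirty_buffers, not actual ext2 code; it only counts I/O submissions.

```c
#include <assert.h>

/* Toy model of the change above: rather than issuing one synchronous
 * write per indirect block as it is allocated, dirty buffers are
 * queued (as on inode->i_dirty_buffers) and written once on the way
 * out of the write path. Only I/O submissions are counted. */
static int io_ops;

#define MAX_DIRTY 64
static int dirty[MAX_DIRTY], ndirty;

static void mark_dirty(int blk)
{
    if (ndirty < MAX_DIRTY)
        dirty[ndirty++] = blk;
}

/* One submission covering every queued buffer, not one per block. */
static void sync_dirty_list(void)
{
    if (ndirty)
        io_ops++;              /* single clustered submission */
    ndirty = 0;
}

static void write_file_sync(int nblocks)
{
    for (int b = 0; b < nblocks; b++)
        mark_dirty(b);         /* old code: a sync per indirect here */
    sync_dirty_list();         /* new code: one sync at the end */
}
```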
-
Alan Cox authored
-
Andrew Morton authored
-
Arjan van de Ven authored
-
Andrew Morton authored
problem. In ext3_new_block() we increment i_blocks early, so the quota operation can be performed outside lock_super(). But if the block allocation ends up failing, we forget to undo the allocation. This is not a serious bug, and probably does not warrant an upgrade for production machines. Its effects are: 1) errors are generated from e2fsck and 2) users could appear to be over quota when they really aren't. The patch undoes the accounting operation if the allocation ends up failing.
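The shape of the fix — charge early so the quota operation stays outside the lock, then undo the charge if the allocation fails — is a standard C error-path pattern. The sketch below is a userspace model with illustrative names, not actual ext3 code.

```c
#include <assert.h>

/* Sketch of the fix's shape: i_blocks and the quota charge are bumped
 * before the allocation so the quota op can run outside the lock; if
 * the allocation then fails, both must be undone. All names here are
 * illustrative, not actual ext3 code. */
static long i_blocks, quota_used;
static long quota_limit = 8;

static int alloc_should_fail;
static int do_alloc(void) { return alloc_should_fail ? -1 : 0; }

static int new_block(void)
{
    if (quota_used + 1 > quota_limit)
        return -1;             /* over quota */
    i_blocks++;                /* charged early, outside the lock */
    quota_used++;

    if (do_alloc() < 0)
        goto fail;             /* block allocation failed */
    return 0;

fail:                          /* the missing undo the patch adds */
    i_blocks--;
    quota_used--;
    return -1;
}
```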
-
Dave Jones authored
Use the Zorro-specific z_{read,write}[bwl] routines from 2.4.x.
-
Jean Tourrilhes authored
-
Jean Tourrilhes authored
-
Jean Tourrilhes authored
o [CRITICAL] Fix race condition between disconnect and the rest
o [CRITICAL] Force synchronous unlink of URBs in disconnect
o [CRITICAL] Clean up instance if disconnect happens before close
<Following patch from Martin Diehl>
o [CRITICAL] Call usb_submit_urb() with GFP_ATOMIC
-
Jean Tourrilhes authored
o [FEATURE] Propagate mode of discovery to higher protocols
o [CORRECT] Disable passive discovery in ircomm and irlan;
  prevents client and server from simultaneously connecting to each other
o [CORRECT] Force expiry of discovery log on LAP disconnect
-
Jean Tourrilhes authored
If the socket is not connected, don't hang up, to allow passive operation.
-
Jean Tourrilhes authored
-
- 03 Apr, 2002 5 commits
-
-
Jean Tourrilhes authored
o [CORRECT] Handle signals while IrSock is blocked on Tx
o [CORRECT] Fix race condition in LAP when receiving with pf bit
o [CRITICAL] Prevent queuing Tx data before IrComm is ready
o [FEATURE] Warn user of common misuse of IrLPT
-
Jean Tourrilhes authored
-
Dave Jones authored
* Make sure to stop the chip before enabling interrupts via request_irq
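The ordering rule here — quiesce the device before registering the handler, because a shared or already-pending interrupt can fire the instant request_irq() succeeds — can be modelled in userspace. Everything below is a simulation with hypothetical names, not the actual driver.

```c
#include <assert.h>

/* Model of the init-ordering rule above: stop the chip before
 * request_irq(), because a shared or already-pending interrupt may
 * fire the moment the handler is registered. Simulated, not real
 * driver code. */
static int chip_running = 1;   /* powered-on chip may be raising IRQs */
static int spurious;

static void irq_handler(void) { if (chip_running) spurious++; }

static void stop_chip(void) { chip_running = 0; }

/* Registering the handler immediately delivers any pending interrupt. */
static void request_irq_mock(void (*handler)(void)) { handler(); }

static int probe(void)
{
    stop_chip();                   /* do this first ...              */
    request_irq_mock(irq_handler); /* ... then registering is safe   */
    return spurious;               /* zero if the ordering is right  */
}
```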
-
Dave Jones authored
* use Zorro-specific z_{read,write}[bwl] routines
* remove superfluous include
-
Dave Jones authored
-