- 18 Jun, 2003 24 commits
-
Andrew Morton authored
Implement the designed locking around j_checkpoint_transactions. It was all pretty much there actually.
-
Andrew Morton authored
Go through all sites which use j_committing_transaction and ensure that the designed locking is correctly implemented there.
-
Andrew Morton authored
Implement the designed locking around journal->j_running_transaction. A lot more of the new locking scheme falls into place.
-
Andrew Morton authored
We now start to move onto the fields of the topmost JBD data structure: the journal. The patch implements the designed locking around the j_barrier_count member. And as a part of that, a lot of the new locking scheme is implemented. Several lock_kernel()s and sleep_on()s go away.
-
Andrew Morton authored
Provide the designed locking around the transaction's t_jcb callback list. It turns out that this is wholly redundant at present.
-
Andrew Morton authored
Implement the designed locking for t_outstanding_credits
-
Andrew Morton authored
Provide the designated locking for transaction_t.t_updates.
-
Andrew Morton authored
Now we move more into the locking of the transaction_t fields. t_nr_buffers locking is just an audit-and-commentary job.
-
Andrew Morton authored
This was a system-wide spinlock. Simple transformation: make it a filesystem-wide spinlock, in the JBD journal. That's a bit lame, and later it might be nice to make it per-transaction_t. But there are interesting ranking and ordering problems with that, especially around __journal_refile_buffer().
-
Andrew Morton authored
Implement the designated b_tnext locking. This also covers b_tprev locking.
-
Andrew Morton authored
Go through all b_next_transaction instances, implement locking rules. (Nothing to do here - b_transaction locking covered it)
-
Andrew Morton authored
Go through all use of b_transaction and implement the rules. Fairly straightforward.
-
Andrew Morton authored
Implement the designed locking schema around the journal_head.b_committed_data field.
-
Andrew Morton authored
We now start to move across the JBD data structure's fields, from "innermost" and outwards. Start with journal_head.b_frozen_data, because the locking for this field was partially implemented in jbd-010-b_committed_data-race-fix.patch. It is protected by jbd_lock_bh_state(). We keep the lock_journal() and spin_lock(&journal_datalist_lock) calls in place. Later, spin_lock(&journal_datalist_lock) is replaced by spin_lock(&journal->j_list_lock). Of course, this completion of the locking around b_frozen_data also puts a lot of the locking for other fields in place.
-
Andrew Morton authored
journal_unlock_journal_head() is misnamed: what it does is to drop a ref on the journal_head and free it if that ref fell to zero. It doesn't actually unlock anything. Rename it to journal_put_journal_head().
-
Andrew Morton authored
buffer_heads and journal_heads are joined at the hip. We need a lock to protect the joint and its refcounts. JBD is currently using a global spinlock for that. Change it to use one bit in bh->b_state.
-
Andrew Morton authored
This was a strange spinlock which was designed to prevent another CPU from ripping a buffer's journal_head away while this CPU was inspecting its state. Really, we don't need it - we can inspect that state directly from bh->b_state. So kill it off, along with a few things which used it which are themselves not actually used any more.
-
Andrew Morton authored
This is the start of the JBD locking rework. The aims of all this are to remove all lock_kernel() calls from JBD, to remove all lock_journal() calls (the context switch rate is astonishing when the lock_kernel()s are removed) and to remove all sleep_on() instances. The strategy which is taken is: a) define the locking schema (this patch), then b) work through every JBD data structure and implement its locking fully, according to the above schema, working from the "innermost" data structures outwards. It isn't guaranteed that the filesystem will work very well at all stages of this patch series. In this patch: add commentary and various locks to jbd.h describing the locking scheme which is about to be implemented; initialise the new locks. Coding-style goodness in jbd.h.
-
Andrew Morton authored
From: Alex Tomas <bzzz@tmi.comex.ru> We have a race wherein the block allocator can decide that journal_head.b_committed_data is present and then will use it. But kjournald can concurrently free it and set the pointer to NULL. It goes oops. We introduce per-buffer_head "spinlocking" based on a bit in b_state. To do this we abstract out pte_chain_lock() and reuse the implementation. The bit-based spinlocking is pretty inefficient CPU-wise (hence the warning in there) and we may move this to a hashed spinlock later.
-
Andrew Morton authored
From: Alex Tomas <bzzz@tmi.comex.ru> This is a port from ext2 of the fuzzy counters (for Orlov allocator heuristics) and the hashed spinlocking (for the inode and block allocators).
-
Andrew Morton authored
From: Alex Tomas <bzzz@tmi.comex.ru>
This patch weans ext3 off lock_super()-based protection for the inode and block allocators. It's basically the same as the ext2 changes.
1) Each group has its own spinlock, which is used for group counter modifications.
2) sb->s_free_blocks_count isn't used any more: ext2_statfs() and find_group_orlov() loop over the groups to count free blocks.
3) sb->s_free_blocks_count is recalculated at mount/umount/sync_super time in order to check consistency and to avoid fsck warnings.
4) Reserved blocks are distributed over the last groups.
5) ext3_new_block() tries to use non-reserved blocks and, if that fails, tries to use reserved blocks.
6) ext3_new_block() and ext3_free_blocks() do not modify sb->s_free_blocks, so they do not call mark_buffer_dirty() for the superblock's buffer_head. This should reduce I/O a bit.
Also fix an Orlov allocator boundary case: in the interests of SMP scalability the ext2 free blocks and free inodes counters are "approximate", but there is a piece of code in the Orlov allocator which fails due to boundary conditions on really small filesystems. Fix that up via a final allocation pass which simply uses first-fit for allocation of a directory inode.
-
Andrew Morton authored
Move some lock_kernel() calls from the caller to the callee, reducing holdtimes.
-
Andrew Morton authored
This is the start of the ext3 scalability rework. It basically comes in two halves: - ext3 BKL/lock_super removal and scalable inode/block allocators - JBD locking rework. The ext3 scalability work was completed a couple of months ago. The JBD rework has been stable for a couple of weeks now. My gut feeling is that there should be one, maybe two bugs left in it, but no problems have been discovered... Performance-wise, throughput is increased by up to 2x on dual CPU. 10x on 16-way has been measured. Given that current ext3 is able to chew two whole CPUs spinning on locks on a 4-way, that wasn't especially surprising. These patches were prepared by Alex Tomas <bzzz@tmi.comex.ru> and myself. First patch: ext3 lock_kernel() removal. The only reason why ext3 takes lock_kernel() is because it is required by the JBD API. The patch removes the lock_kernels() from ext3 and pushes them down into JBD itself.
-
Linus Torvalds authored
Merge http://lia64.bkbits.net/to-linus-2.5 into home.transmeta.com:/home/torvalds/v2.5/linux
-
- 17 Jun, 2003 16 commits
-
-
David Mosberger authored
-
David Mosberger authored
-
David Mosberger authored
into tiger.hpl.hp.com:/data1/bk/lia64/to-linus-2.5
-
Miles Bader authored
-
Miles Bader authored
-
Miles Bader authored
-
Miles Bader authored
-
Anton Blanchard authored
I must not ignore compiler warnings. I must not ignore compiler warnings. I must not ignore compiler warnings.
-
John Levon authored
Avoid the linear list walk of get_exec_dcookie() when we've switched to a task using the same mm.
-
John Levon authored
Use the IO-APIC NMI delivery when the local APIC performance counter delivery is not available. By Zwane Mwaikambo.
-
John Levon authored
Reduce the possibility of dazed-and-confuseds.
-
Paul Fulghum authored
- Fix 'badness in local_bh_enable' warning. This involved moving dev_queue_xmit() calls outside of sections with a spinlock held. - Fix 'fix old protocol handler' warning. This includes accounting for shared skbs, setting the protocol .data field to non-null, and adding per-device synchronization to the receive handler. This has been tested in PPP and Cisco modes, with and without the keepalives enabled, on an SMP machine.
-
Matthew Wilcox authored
This patch creates fs/Kconfig.binfmt and converts all architectures to use it. I took the opportunity to spruce up the a.out help text for the new millennium.
-
Neil Brown authored
A request might traverse several export points which may have different uid squashing.
-
Neil Brown authored
From: "William A.(Andy) Adamson" <andros@citi.umich.edu>
-
Neil Brown authored
From: "William A.(Andy) Adamson" <andros@citi.umich.edu> Put all clients in a LRU list and use a "work_queue" to expire old clients periodically.
-