- 24 Nov, 2002 9 commits
-
-
Kai Germaschewski authored
The HFC subdrivers chose to use atomic ops to re-implement something like broken spinlocks. That's now gone. Basically all races should be taken care of by the fact that we take cs->lock before going down to the hardware; this lock protects against concurrent accesses from IRQ context. Well, some rare paths (setting mode etc.) don't take the lock yet, so it's not quite done yet.
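A minimal sketch of that pattern, assuming hisax-style names (struct IsdnCardState and cs->lock are real hisax identifiers; the helper and the register access are hypothetical):

#include <linux/spinlock.h>
#include <linux/types.h>

/* Take cs->lock before going down to the hardware, so process-context
 * and IRQ-context accesses are serialized. */
static void hfc_write_fifo(struct IsdnCardState *cs, u8 *data, int count)
{
	unsigned long flags;

	spin_lock_irqsave(&cs->lock, flags);
	/* ... program the HFC FIFO and copy 'count' bytes from 'data' ... */
	spin_unlock_irqrestore(&cs->lock, flags);
}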
-
Kai Germaschewski authored
More duplicated code gone. Also, remove the unused skb argument from xmit_pull_req_b().
-
Kai Germaschewski authored
As usual, lots of duplicated code gone.
-
Kai Germaschewski authored
Another bit of D-channel busy handling can move as well.
-
Kai Germaschewski authored
The FIFO based cards can share the data underrun handling.
-
Kai Germaschewski authored
More code which can be nicely shared...
-
Kai Germaschewski authored
Same as for the B-Channels. We need to make sure that this doesn't race with a new frame arriving from the upper layer, which will be done shortly by sharing the upper layer interface as well. Protection is provided by card->lock, which is now always taken around the entire interrupt handler - more coarse-grained than necessary, but still better than the global cli(); correctness and simplicity first.
-
Kai Germaschewski authored
No reason to duplicate sched_event() all over the drivers...
-
Kai Germaschewski authored
Now repeat the steps of unifying the xmit path for B-Channels for D-Channel handling. Parts of xmit_fill_fifo() can easily be shared.
-
- 23 Nov, 2002 15 commits
-
-
Kai Germaschewski authored
Something else which can be nicely shared amongst various drivers...
-
Kai Germaschewski authored
Again, the hardware is similar enough to use a shared function, only the method of resetting the transmitter needs to be specified.
-
Kai Germaschewski authored
If we lose a fragment of a frame, we need to restart from the beginning, share that code.
-
Kai Germaschewski authored
Six of the hardware chips handle B-channel xmit so similarly that we can share the code to handle XPR (transmit pool ready).
-
Kai Germaschewski authored
More obviously duplicated code moved into just one place.
-
Kai Germaschewski authored
Again, lots of drivers duplicated code to start transmitting a B-channel frame. Now we do this from one place, and the converted drivers are obviously serialized w.r.t. calling ->BC_Send_Data().
-
Kai Germaschewski authored
A lot of drivers do the same thing when they're ready to transmit the next frame, so let's share that code.
-
Kai Germaschewski authored
Again, this code sequence is repeated in a lot of drivers, so separate it out.
-
Kai Germaschewski authored
There's no need for each hardware driver to implement its own (short) xxx_schedule_event().
-
Kai Germaschewski authored
For the most part, Linux drivers shouldn't muck with that low-level stuff at all, but rather leave it to the generic layer.
-
Kai Germaschewski authored
We used to call writewakeup() directly from handling the "frame sent" IRQ, which could call back up into the common ISDN layer, and higher up still, all in hard-IRQ context. Instead, queue the event and use the usual mechanism of passing it up.
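A sketch of that mechanism, with illustrative names loosely modeled on hisax's BCState (the real code uses its own event bits and deferred-work structure):

#include <linux/bitops.h>
#include <linux/interrupt.h>

/* Per-channel state; field names are illustrative. */
struct bc_state {
	unsigned long event;		/* pending event bits */
	struct tasklet_struct tasklet;	/* bottom half that passes events up */
};

#define B_XMTBUFREADY	0		/* "transmit buffer ready" event */

/* Called from the "frame sent" IRQ: record the event and let the
 * tasklet deliver it later from softirq context, instead of calling
 * writewakeup() while still in hard-IRQ context. */
static inline void sched_b_event(struct bc_state *bcs, int event)
{
	set_bit(event, &bcs->event);
	tasklet_schedule(&bcs->tasklet);
}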
-
Kai Germaschewski authored
We use a per-card spinlock to protect the interrupt handler and the normal tx path from each other.
-
Kai Germaschewski authored
Fairly obvious, but untested.
-
Kai Germaschewski authored
When an application needs an ISDN channel ("slot"), it should now use get_slot(), which will make sure that the corresponding driver module doesn't unregister under us.
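A sketch of the idea, using the module refcounting API; the struct layout and put_slot() are illustrative, only get_slot() is named above:

#include <linux/module.h>

struct isdn_slot {
	struct module *owner;	/* driver module providing this channel */
	/* ... */
};

/* Pin the owning driver module while the slot is in use. */
static struct isdn_slot *get_slot(struct isdn_slot *slot)
{
	if (!try_module_get(slot->owner))
		return NULL;	/* driver is on its way out */
	return slot;
}

static void put_slot(struct isdn_slot *slot)
{
	module_put(slot->owner);
}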
-
Kai Germaschewski authored
Merge into tp1.ruhr-uni-bochum.de:/home/kai/src/kernel/v2.5/linux-2.5.isdn
-
- 22 Nov, 2002 16 commits
-
-
Linus Torvalds authored
-
Linus Torvalds authored
Merge bk://bk.arm.linux.org.uk
into penguin.transmeta.com:/home/penguin/torvalds/repositories/kernel/linux
-
Russell King authored
Fix compilation errors for do_fork() and print_symbol()
-
Linus Torvalds authored
Merge bk://cifs.bkbits.net/linux-2.5cifs
into penguin.transmeta.com:/home/penguin/torvalds/repositories/kernel/linux
-
Andrew Morton authored
Implements a new set of block address_space_operations which will never attach buffer_heads to file pagecache. These can be turned on for ext2 with the `nobh' mount option.

During write-intensive testing on a 7G machine, total buffer_head storage remained below 0.3 megabytes. And those buffer_heads are against ZONE_NORMAL pagecache and will be reclaimed by ZONE_NORMAL memory pressure.

This work is, of course, aimed specially at the huge highmem machines. Possibly it obsoletes the buffer_heads_over_limit stuff (which doesn't work terribly well), but that code is simple and will still provide relief for other filesystems.

It should be noted that the nobh_prepare_write() function and the PageMappedToDisk() infrastructure provide what is needed to solve the problem of user data corruption when the filesystem which backs a sparse MAP_SHARED mapping runs out of space. We can use this code in filemap_nopage() to ensure that all mapped pages have space allocated on-disk. Deliver SIGBUS on ENOSPC. This will require a new address_space op, I expect.
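A fragmentary sketch of what the nobh variant looks like for ext2; the exact members vary by kernel version, and the referenced functions live in fs/ext2 and fs/buffer.c:

/* address_space_operations that never attach buffer_heads to file
 * pagecache: prepare/commit go through the nobh helpers. */
static struct address_space_operations ext2_nobh_aops = {
	.readpage	= ext2_readpage,
	.writepage	= ext2_writepage,
	.sync_page	= block_sync_page,
	.prepare_write	= ext2_nobh_prepare_write,	/* wraps nobh_prepare_write() */
	.commit_write	= nobh_commit_write,		/* no buffer_heads attached */
	.bmap		= ext2_bmap,
};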
-
Andrew Morton authored
This patch is a general solution to the situation where a zone is full of pinned pages. This can come about if:

a) Someone has allocated all of ZONE_DMA for IO buffers

b) Some application is mlocking some memory and a zone ends up full of mlocked pages (can happen on a 1G ia32 system)

c) All of ZONE_HIGHMEM is pinned in hugetlb pages (can happen on 1G machines)

We'll currently burn 10% of CPU in kswapd when this happens, although it is quite hard to trigger.

The algorithm is:

- If page reclaim has scanned 2 * the total number of pages in the zone and there have been no pages freed in that zone, then mark the zone as "all unreclaimable".

- When a zone is "all unreclaimable", page reclaim almost ignores it. We will perform a "light" scan at DEF_PRIORITY (typically 1/4096'th of the zone, or 64 pages) and then forget about the zone.

- When a batch of pages are freed into the zone, clear its "all unreclaimable" state and start full scanning again. The assumption being that some state change has come about which will make reclaim successful again. So if a "light scan" actually frees some pages, the zone will revert to normal state immediately.

So we're effectively putting the zone into "low power" mode, and lightly polling it to see if something has changed.

The code works OK, but is quite hard to test - I mainly tested it by pinning all highmem in hugetlb pages.
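Pseudo-C sketch of that state machine; field names follow the 2.5-era struct zone, but this is illustrative rather than the actual mm/vmscan.c change:

/* Reclaim bookkeeping after scanning a zone. */
static void note_zone_scan(struct zone *zone, int nr_scanned, int nr_freed)
{
	if (nr_freed) {
		/* Progress again: leave "low power" mode, scan fully. */
		zone->all_unreclaimable = 0;
		zone->pages_scanned = 0;
	} else {
		zone->pages_scanned += nr_scanned;
		if (zone->pages_scanned > 2 * zone->present_pages)
			zone->all_unreclaimable = 1;
	}
}

/* Unreclaimable zones only get the light poll at DEF_PRIORITY. */
static int worth_scanning(struct zone *zone, int priority)
{
	return !zone->all_unreclaimable || priority == DEF_PRIORITY;
}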
-
Andrew Morton authored
Strengthen the `incremental min' logic in the page allocator.

Currently it is allowing the allocation to succeed if the zone has free_pages >= pages_high. This was to avoid a lockup corner case in which all the zones were at pages_high so reclaim wasn't doing anything, but the incremental min refused to take pages from those zones anyway.

But we want the incremental min zone protection to work. So:

- Only allow the allocator to dip below the incremental min if it cannot run direct reclaim.

- Change the page reclaim code so that on the direct reclaim path, the caller can free pages beyond ->pages_high. So if the incremental min test fails, the caller will go and free some more memory. Eventually, the caller will have freed enough memory for the incremental min test to pass against one of the zones.
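A sketch of the incremental min walk, modeled on the 2.5-era __alloc_pages() loop (rmqueue() takes a page off the zone's free lists); the fallback behaviour for callers that cannot reclaim is summarized in the trailing comment:

static struct page *alloc_pages_sketch(struct zone **zones, unsigned int order)
{
	unsigned long min = 1UL << order;
	struct page *page;
	int i;

	for (i = 0; zones[i] != NULL; i++) {
		struct zone *z = zones[i];

		min += z->pages_low;	/* each fallback zone raises the bar */
		if (z->free_pages >= min) {
			page = rmqueue(z, order);
			if (page)
				return page;
		}
	}
	/*
	 * All zones are below their incremental min.  Callers which can
	 * run direct reclaim must now go and free pages (possibly beyond
	 * ->pages_high) and retry; only callers which cannot reclaim are
	 * allowed to dip below the min.
	 */
	return NULL;
}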
-
Andrew Morton authored
The vm_writeback address_space operation was designed to provide the VM with a "clustered writeout" capability. It allowed the filesystem to perform more intelligent writearound decisions when the VM was trying to clean a particular page.

I can't say I ever saw any real benefit from this - not much writeout actually happens on that path; quite a lot of work has gone into minimising it, actually.

The default ->vm_writeback a_op which I provided wrote back the pages in ->dirty_pages order. But there is one scenario in which this causes problems - writing a single 4G file with mem=4G. We end up with all of ZONE_NORMAL full of dirty pages, but all writeback effort is against highmem pages (because there is about 1.5G of dirty memory total). Net effect: the machine stalls ZONE_NORMAL allocation attempts until the ->dirty_pages writeback advances onto ZONE_NORMAL pages.

This can be fixed most sweetly with additional radix-tree infrastructure, which will be quite complex. Later.

So this patch dumps it all, and goes back to using writepage against individual pages as they come off the LRU.
-
Andrew Morton authored
blk_congestion_wait() is a utility function which various callers use to throttle themselves to the rate at which the IO system can retire writes.

The current implementation refuses to wait if no queues are "congested" (>75% of requests are in flight). That doesn't work if the queue is so huge that it can hold more than 40% (dirty_ratio) of memory: the queue simply cannot enter congestion because the VM refuses to allow more than 40% of memory to be dirtied. (This spin could happen with a lot of normal-sized queues too.)

So this patch simply changes blk_congestion_wait() to throttle even if there are no congested queues. It will cause the caller to sleep until someone puts back a write request against any queue. (Nobody uses blk_congestion_wait for read congestion.)

The patch adds new state to backing_dev_info->state: a couple of flags which indicate whether there are _any_ reads or writes in flight against that queue. This was added to prevent blk_congestion_wait() from taking a nap when there are no writes at all in flight. But the "are there any reads" info could be used to defer background writeout from pdflush, to reduce read-vs-write competition. We'll see.

This matters because the large request queues have made a fundamental change: blocking in get_request_wait() has been the main form of VM throttling for years, but with large queues it doesn't work any more - all throttling happens in blk_congestion_wait().

Also, change io_schedule_timeout() to propagate the schedule_timeout() return value. I was using that in some debug code, but it should have been like that from day one.
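A sketch of the throttling this enables: a heavy writer sleeps until some write completes anywhere, congested or not. The HZ/20 interval matches in-tree callers of the time; dirty_limit_exceeded() is a stand-in for the ratio test done in balance_dirty_pages():

static void throttle_heavy_writer(struct address_space *mapping)
{
	/* Sleep until a write request is put back against any queue. */
	while (dirty_limit_exceeded(mapping))
		blk_congestion_wait(WRITE, HZ / 20);
}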
-
Andrew Morton authored
From Roman Zippel. Don't assume that physical memory starts at physical address zero.
-
Andrew Morton authored
Patch from Stephen Tweedie:

"In looking at the fix for the ext3 Orlov double-accounting bug, I noticed a change to the sb->s_dir_count accounting, restoring a missing s_dir_count++ when we allocate a new directory. However, I can't find anywhere in the code where we decrement this again on directory deletion, neither in ext2 nor in ext3, in 2.4 nor in 2.5."

Locking is via lock_super().
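The shape of the missing half of the accounting, as a sketch modeled on ext2_free_inode(); the helper name is hypothetical:

static void ext2_dec_dir_count(struct super_block *sb, struct inode *inode)
{
	lock_super(sb);
	if (S_ISDIR(inode->i_mode))
		EXT2_SB(sb)->s_dir_count--;	/* pairs with the s_dir_count++ on create */
	unlock_super(sb);
}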
-
Andrew Morton authored
There is a warning in there to detect when block_write_full_page() attaches buffers to a blockdev page. This is a bad thing because that page's blocks may then overlap blocks from a different address_space. So I disallowed it. But the message can be triggered when an application is mmapping a blockdev MAP_SHARED. Apparently INND likes to do this. So remove the warning.
-
Andrew Morton authored
Patch from Christopher Li <chrisl@vmware.com>

This little patch fixes two places in the htree code which forgot the cpu_to_le16 conversion. This bug causes incorrect record lengths on PPC. Thanks to Franz for reporting the problem.
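The shape of the fix, as an illustrative helper standing in for the two htree call sites:

static void set_rec_len(struct ext3_dir_entry_2 *de, unsigned int len)
{
	de->rec_len = cpu_to_le16(len);	/* was: de->rec_len = len; */
}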
-
Andrew Morton authored
Patch from Andreas Gruenbacher <agruen@suse.de>

The setxattr inode operation is defined like this in 2.4 and 2.5:

int (*setxattr) (struct dentry *dentry, const char *name, void *value, size_t size, int flags);

The original type of the value parameter was `const void *'; the const obviously has been lost at some point. The definition should be:

int (*setxattr) (struct dentry *dentry, const char *name, const void *value, size_t size, int flags);
-
Andrew Morton authored
The page allocator has traditionally just gone BUG when it sees a page in a bad state. This is usually due to hardware errors, sometimes software errors. I'm proposing that we not go BUG() any more, but print lots (and lots) of diagnostic info and try to continue. Might be a bit controversial.
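A sketch of the new policy, loosely modeled on the bad-page handling that came out of this change; the exact diagnostics and fields are illustrative:

static void bad_page_sketch(const char *function, struct page *page)
{
	printk(KERN_ALERT "Bad page state at %s: flags=0x%lx count=%d\n",
	       function, page->flags, page_count(page));
	dump_stack();
	/* Quarantine the page rather than crashing the box. */
	set_page_count(page, 0);
}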
-
Andrew Morton authored
balance_dirty_pages() is too expensive to call once-per-page. Use the ratelimited version.
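The shape of the change at each per-page call site; the surrounding function is hypothetical, balance_dirty_pages_ratelimited() is the real entry point:

static void after_dirtying_a_page(struct address_space *mapping)
{
	/* was: balance_dirty_pages(mapping); */
	balance_dirty_pages_ratelimited(mapping);
}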
-