- 25 Nov, 2002 4 commits
-
Linus Torvalds authored
Merge http://linux-isdn.bkbits.net/linux-2.5.isdn
into penguin.transmeta.com:/home/penguin/torvalds/repositories/kernel/linux
-
Ralf Bächle authored
init/do_mounts.c uses the BLKGETSIZE ioctl, which expects a pointer to an unsigned long, but it actually passes a pointer to an int, which of course blows up on 64-bit systems.
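The mismatch is easy to see in miniature; a hedged sketch (the function and variable names here are illustrative, only the ioctl semantics are from the message above):

    /* hedged sketch: how do_mounts.c-style code should query the size */
    static unsigned long get_dev_sectors(int fd)
    {
            unsigned long nr_sects = 0;     /* must be unsigned long: BLKGETSIZE
                                             * writes a full unsigned long, so an
                                             * int here gets clobbered past its
                                             * end on 64-bit systems */
            sys_ioctl(fd, BLKGETSIZE, (unsigned long)&nr_sects);
            return nr_sects;
    }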
-
Linus Torvalds authored
Merge bk://linux-scsi.bkbits.net/scsi-dledford
into home.transmeta.com:/home/torvalds/v2.5/linux
-
David S. Miller authored
filp_open expects a kernel pointer, not a user one.
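A hedged, era-appropriate sketch of the distinction (the helper below is hypothetical): a user-space path has to be copied into kernel memory, e.g. with getname(), before filp_open() ever sees it.

    /* hypothetical helper: open a path handed to us from user space */
    static struct file *open_user_path(const char *upath)  /* user pointer */
    {
            char *kpath = getname(upath);   /* copies string into kernel memory */
            struct file *filp;

            if (IS_ERR(kpath))
                    return ERR_PTR(PTR_ERR(kpath));
            filp = filp_open(kpath, O_RDONLY, 0);   /* kernel pointer only */
            putname(kpath);
            return filp;
    }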
-
- 24 Nov, 2002 9 commits
-
Kai Germaschewski authored
The HFC subdrivers chose to use atomic ops to re-implement something like broken spinlocks. That's now gone. Basically all races should be taken care of by the fact that we take cs->lock before going down to the hardware; this lock protects against concurrent accesses from IRQ context. Some rare paths (setting the mode etc.) don't take the lock yet, so it's not quite done yet.
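A hedged sketch of the rule described above (cs->lock is from the message; the register poking is a placeholder):

    unsigned long flags;

    spin_lock_irqsave(&cs->lock, flags);    /* excludes the IRQ handler */
    /* ... go down to the hardware: read/write the HFC chip registers ... */
    spin_unlock_irqrestore(&cs->lock, flags);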
-
Kai Germaschewski authored
More duplicated code gone. Also, remove the unused skb argument from xmit_pull_req_b().
-
Kai Germaschewski authored
As usual, lots of duplicated code gone.
-
Kai Germaschewski authored
Another bit of D-channel busy handling can move as well.
-
Kai Germaschewski authored
The FIFO based cards can share the data underrun handling.
-
Kai Germaschewski authored
More code which can be nicely shared...
-
Kai Germaschewski authored
Same as for the B-channels. We need to make sure that this doesn't race with a new frame arriving from the upper layer, which will be handled shortly by sharing the upper-layer interface as well. Protection is provided by card->lock, which is now always taken around the entire interrupt handler - coarser-grained than necessary, but still better than the global cli(); correctness and simplicity first.
-
Kai Germaschewski authored
No reason to duplicate sched_event() all over the drivers...
-
Kai Germaschewski authored
Now repeat, for the D-channel handling, the steps that unified the B-channel xmit path. Parts of xmit_fill_fifo() can easily be shared.
-
- 23 Nov, 2002 15 commits
-
Kai Germaschewski authored
Something else which can be nicely shared amongst various drivers...
-
Kai Germaschewski authored
Again, the hardware is similar enough to use a shared function, only the method of resetting the transmitter needs to be specified.
-
Kai Germaschewski authored
If we lose a fragment of a frame, we need to restart from the beginning; share that code.
-
Kai Germaschewski authored
Six of the hardware chips for B-channel xmit work so similarly that we can share the code to handle XPR (transmit pool ready).
-
Kai Germaschewski authored
More obviously duplicated code moved into just one place.
-
Kai Germaschewski authored
Again, lots of drivers duplicated code to start transmitting a B-channel frame. Now we do this from one place, and the converted drivers are obviously serialized w.r.t. calling ->BC_Send_Data().
-
Kai Germaschewski authored
A lot of drivers do the same thing when they're ready for transmitting the next frame, so let's share that code.
-
Kai Germaschewski authored
Again, this code sequence is repeated in a lot of drivers, so separate it out.
-
Kai Germaschewski authored
There's no need for each hardware driver to implement its own (short) xxx_schedule_event().
-
Kai Germaschewski authored
For the most part, Linux drivers shouldn't muck with that low-level stuff at all, but rather leave it to the generic layer.
-
Kai Germaschewski authored
We used to call writewakeup() directly from the handling of the "frame sent" IRQ, which would potentially call back up into the common ISDN layer, and higher up still, in hard-IRQ context. Instead, queue the event and use the usual mechanism of passing it up.
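A hedged sketch of the queue-then-deliver pattern (the struct, bit and helper names are guesses at the shape, not the actual code; only writewakeup() is from the message):

    /* hard-IRQ context: record the event, do not call up the stack */
    static void frame_sent_irq(struct BCState *bcs)
    {
            set_bit(B_XMTBUFREADY, &bcs->event);    /* remember the event */
            schedule_work(&bcs->work);              /* defer the wakeup   */
    }

    /* runs later, outside hard-IRQ context */
    static void bc_event_work(void *data)
    {
            struct BCState *bcs = data;

            if (test_and_clear_bit(B_XMTBUFREADY, &bcs->event))
                    writewakeup(bcs);               /* now safe to call up */
    }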
-
Kai Germaschewski authored
We use a per-card spinlock to protect interrupt and normal tx path from each other.
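In sketch form (hedged; card->lock is from the message, the bodies are placeholders), both paths take the same lock, the tx path with interrupts disabled:

    /* normal tx path, process context */
    spin_lock_irqsave(&card->lock, flags);
    /* ... queue the frame and kick the transmitter ... */
    spin_unlock_irqrestore(&card->lock, flags);

    /* interrupt handler: the same lock serializes the two paths */
    spin_lock(&card->lock);
    /* ... ack the chip, drain/refill the FIFOs ... */
    spin_unlock(&card->lock);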
-
Kai Germaschewski authored
Fairly obvious, but untested.
-
Kai Germaschewski authored
When an application needs an ISDN channel ("slot"), it should now use get_slot(), which makes sure that the corresponding driver module doesn't get unregistered out from under us.
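get_slot() is the name given above; everything else in this sketch is an assumption about its shape, using a try_module_get()-style owner reference to keep the driver loaded:

    /* hedged sketch: pin the driver module while a slot is in use */
    static struct slot *get_slot(struct isdn_driver *drv, int channel)
    {
            if (!try_module_get(drv->owner))        /* driver going away?  */
                    return NULL;
            return &drv->slots[channel];            /* channel is now safe */
    }

    static void put_slot(struct isdn_driver *drv)
    {
            module_put(drv->owner);                 /* balances get_slot() */
    }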
-
Kai Germaschewski authored
into tp1.ruhr-uni-bochum.de:/home/kai/src/kernel/v2.5/linux-2.5.isdn
-
- 22 Nov, 2002 12 commits
-
Doug Ledford authored
touching a /proc/* file doesn't pin a module in memory
-
Doug Ledford authored
Fix the detach code so it doesn't call sysfs unregister with a lock held
-
Doug Ledford authored
Merge bk://linux.bkbits.net/linux-2.5
into flossy.devel.redhat.com:/usr/local/home/dledford/bk/linus-2.5
-
Doug Ledford authored
Add in usage of new same_target_siblings list. Add scsi_release_commandblocks() call to scsi_free_sdev(). Make all scsi device freeing use scsi_free_sdev().
-
Linus Torvalds authored
-
Linus Torvalds authored
Merge bk://bk.arm.linux.org.uk
into penguin.transmeta.com:/home/penguin/torvalds/repositories/kernel/linux
-
Russell King authored
Fix compilation errors for do_fork() and print_symbol()
-
Linus Torvalds authored
Merge bk://cifs.bkbits.net/linux-2.5cifs
into penguin.transmeta.com:/home/penguin/torvalds/repositories/kernel/linux
-
Andrew Morton authored
Implements a new set of block address_space_operations which will never attach buffer_heads to file pagecache. These can be turned on for ext2 with the `nobh' mount option.

During write-intensive testing on a 7G machine, total buffer_head storage remained below 0.3 megabytes. And those buffer_heads are against ZONE_NORMAL pagecache and will be reclaimed by ZONE_NORMAL memory pressure.

This work is, of course, aimed specially at the huge highmem machines. Possibly it obsoletes the buffer_heads_over_limit stuff (which doesn't work terribly well), but that code is simple and will provide relief for other filesystems.

It should be noted that the nobh_prepare_write() function and the PageMappedToDisk() infrastructure are what is needed to solve the problem of user data corruption when the filesystem backing a sparse MAP_SHARED mapping runs out of space: we can use this code in filemap_nopage() to ensure that all mapped pages have space allocated on-disk, and deliver SIGBUS on ENOSPC. This will require a new address_space op, I expect.
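nobh_prepare_write() is mentioned above; a hedged sketch of how a filesystem might plug the buffer_head-free path into its address_space_operations (the ext2-flavoured names are illustrative):

    static int ext2_nobh_prepare_write(struct file *file, struct page *page,
                                       unsigned from, unsigned to)
    {
            /* maps the page to disk without attaching buffer_heads */
            return nobh_prepare_write(page, from, to, ext2_get_block);
    }

    struct address_space_operations ext2_nobh_aops = {
            .readpage      = ext2_readpage,         /* read side unchanged */
            .writepage     = ext2_writepage,
            .prepare_write = ext2_nobh_prepare_write,
            .commit_write  = nobh_commit_write,     /* no buffer_heads kept */
    };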
-
Andrew Morton authored
This patch is a general solution to the situation where a zone is full of pinned pages. This can come about if:

a) someone has allocated all of ZONE_DMA for IO buffers

b) some application is mlocking some memory and a zone ends up full of mlocked pages (can happen on a 1G ia32 system)

c) all of ZONE_HIGHMEM is pinned in hugetlb pages (can happen on 1G machines)

We'll currently burn 10% of CPU in kswapd when this happens, although it is quite hard to trigger.

The algorithm is:

- If page reclaim has scanned 2 * the total number of pages in the zone and there have been no pages freed in that zone, then mark the zone as "all unreclaimable".

- When a zone is "all unreclaimable", page reclaim almost ignores it. We will perform a "light" scan at DEF_PRIORITY (typically 1/4096'th of the zone, or 64 pages) and then forget about the zone.

- When a batch of pages is freed into the zone, clear its "all unreclaimable" state and start full scanning again, the assumption being that some state change has come about which will make reclaim successful again.

So if a "light scan" actually frees some pages, the zone will revert to normal state immediately. We're effectively putting the zone into "low power" mode, and lightly polling it to see if something has changed.

The code works OK, but is quite hard to test - I mainly tested it by pinning all highmem in hugetlb pages.
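A hedged sketch of the bookkeeping the list above describes (field and helper names approximate the idea, not the exact code):

    /* reclaim path: scanned twice the zone with nothing freed? give up */
    if (zone->pages_scanned > 2 * zone_total_pages(zone) &&
        zone->nr_freed == 0)
            zone->all_unreclaimable = 1;    /* only light scans from now on */

    /* page-freeing path: fresh pages mean the situation has changed */
    if (freed_into(zone)) {
            zone->all_unreclaimable = 0;    /* resume full scanning */
            zone->pages_scanned = 0;
    }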
-
Andrew Morton authored
Strengthen the `incremental min' logic in the page allocator.

Currently it is allowing the allocation to succeed if the zone has free_pages >= pages_high. This was to avoid a lockup corner case in which all the zones were at pages_high so reclaim wasn't doing anything, but the incremental min refused to take pages from those zones anyway.

But we want the incremental min zone protection to work. So:

- Only allow the allocator to dip below the incremental min if it cannot run direct reclaim.

- Change the page reclaim code so that on the direct reclaim path, the caller can free pages beyond ->pages_high. So if the incremental min test fails, the caller will go and free some more memory. Eventually, the caller will have freed enough memory for the incremental min test to pass against one of the zones.
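A hedged sketch of the incremental min walk itself (simplified; the names approximate allocator-era code, and alloc_pages_zone() is hypothetical):

    unsigned long min = 1UL << order;       /* pages needed for this alloc */
    struct zone *z;
    int i;

    for (i = 0; (z = zonelist->zones[i]) != NULL; i++) {
            min += z->pages_low;            /* each zone raises the bar   */
            if (z->free_pages > min)        /* enough headroom here?      */
                    return alloc_pages_zone(z, order);
    }
    /* no zone passed: run direct reclaim, which may now free past
     * ->pages_high so that the test above can eventually succeed */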
-
Andrew Morton authored
The vm_writeback address_space operation was designed to provide the VM with a "clustered writeout" capability. It allowed the filesystem to perform more intelligent writearound decisions when the VM was trying to clean a particular page.

I can't say I ever saw any real benefit from this - not much writeout actually happens on that path - quite a lot of work has gone into minimising it actually.

The default ->vm_writeback a_op which I provided wrote back the pages in ->dirty_pages order. But there is one scenario in which this causes problems - writing a single 4G file with mem=4G. We end up with all of ZONE_NORMAL full of dirty pages, but all writeback effort is against highmem pages (because there is about 1.5G of dirty memory total). Net effect: the machine stalls ZONE_NORMAL allocation attempts until the ->dirty_pages writeback advances onto ZONE_NORMAL pages.

This can be fixed most sweetly with additional radix-tree infrastructure which will be quite complex. Later.

So this patch dumps it all, and goes back to using writepage against individual pages as they come off the LRU.
-