- 03 Jul, 2003 5 commits
-
-
Linus Torvalds authored
uses it for now, but I needed it for some tuning tests, and it is potentially useful for others.
-
Linus Torvalds authored
Merge bk://linux-pnp.bkbits.net/pnp-2.5
into home.osdl.org:/home/torvalds/v2.5/linux
-
Adam Belay authored
This patch corrects a trivial thinko in the manual resource API.
-
Adam Belay authored
This patch updates the resource manager so that it actually assigns disabled resources when they are requested by the device.
-
Adam Belay authored
Some devices will allow for individual resources to be disabled, even when the device as a whole is active. The current PnP resource manager is not handling this situation properly. This patch corrects the issue by detecting disabled resources and then flagging them. The PnP layer will now skip over any disabled resources. Interface updates have also been included so that we can properly display resource tables when a resource is disabled. Also note that a new flag, "IORESOURCE_DISABLED", has been added to linux/ioports.h.
-
- 02 Jul, 2003 31 commits
-
-
Linus Torvalds authored
used to work due to some random magic indirect include, but broke lately. Do the obvious fix.
-
Rusty Russell authored
This moves the ksoftirqd pointers out of the irq_stat struct, and uses a normal per-cpu variable. It's not that time critical, nor referenced in assembler. This moves us closer to making irq_stat a per-cpu variable. Because some archs have hardcoded asm references to offsets in this structure, I haven't touched non-x86. The __ksoftirqd_task field is unused in other archs, too.
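A minimal sketch of the per-cpu conversion described above, for illustration only (the accessor macro name ksoftirqd_task() is an assumption, not necessarily what the patch uses):

    #include <linux/percpu.h>
    #include <linux/sched.h>

    /* one pointer per CPU, instead of a field inside irq_stat[] */
    static DEFINE_PER_CPU(struct task_struct *, ksoftirqd);

    /* old accessor:  irq_stat[cpu].__ksoftirqd_task */
    /* new accessor (assumed spelling):              */
    #define ksoftirqd_task(cpu)   per_cpu(ksoftirqd, (cpu))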
-
Rusty Russell authored
No one seems to use __syscall_count. Remove the field from the i386 irq_cpustat_t struct, and the generic accessor macros. Because some archs have hardcoded asm references to offsets in this structure, I haven't touched non-x86, but doing so is usually trivial.
-
Rusty Russell authored
Rather trivial conversion. Tested on SMP.
-
Rusty Russell authored
The function cpu_raise_softirq() takes a softirq number and a cpu number, but cannot be used with cpu != smp_processor_id(), because there's no locking around the pending softirq lists. Since no one does this, remove that arg. As per Linus' suggestion, names changed:

    raise_softirq(int nr)
    cpu_raise_softirq(int cpu, int nr)    -> raise_softirq_irqoff(int nr)
    __cpu_raise_softirq(int cpu, int nr)  -> __raise_softirq_irqoff(int nr)
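Call-site impact, sketched (NET_TX_SOFTIRQ is just an example softirq; any caller that already runs with interrupts disabled changes the same way):

    /* before: the caller had to pass its own CPU explicitly */
    __cpu_raise_softirq(smp_processor_id(), NET_TX_SOFTIRQ);

    /* after: the "this CPU, IRQs already off" contract is in the name */
    __raise_softirq_irqoff(NET_TX_SOFTIRQ);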
-
Andrew Morton authored
I thought Scott had recently merged this but it seems not. We'll be needing this patch if you merge Manfred's page unmapping debug patch.
-
Andrew Morton authored
From: Jens Axboe <axboe@suse.de> It fixes a hang when performing large I/Os. Has been tested and acked by the maintainer, "Wiran, Francis" <francis.wiran@hp.com>.
-
Andrew Morton authored
From: bert hubert <ahu@ds9a.nl> Attached patch adds a range check to LOG_BUF_SHIFT and clarifies the configuration somewhat. I managed to build a non-booting kernel because I thought 64 was a nice power of two, which led to the kernel blocking when it tried to actually use or allocate a 2^64 buffer.
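For context, a sketch of why an out-of-range shift is fatal (the exact definition site is assumed, not quoted from the patch):

    /* kernel/printk.c, approximately: the log buffer is sized by a shift,
     * so LOG_BUF_SHIFT=64 asks for 1 << 64 -- undefined behaviour on a
     * 32-bit int and far more memory than could ever be allocated. */
    #define LOG_BUF_LEN (1 << CONFIG_LOG_BUF_SHIFT)

    static char log_buf[LOG_BUF_LEN];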
-
Andrew Morton authored
CPU0                                    CPU1

journal_get_write_access(bh)
 (Add buffer to t_reserved_list)

                                        journal_get_write_access(bh)
                                         (It's already on t_reserved_list:
                                          nothing to do)

(We decide we don't want to
 journal the buffer after all)
journal_release_buffer()
 (It gets pulled off the transaction)

                                        journal_dirty_metadata()
                                         (The buffer isn't on the reserved
                                          list!  The kernel explodes)

Simple fix: just leave the buffer on t_reserved_list in journal_release_buffer(). If nobody ends up claiming the buffer then it will get thrown away at start of transaction commit.
-
Andrew Morton authored
If load_elf_binary() (and the other binary handlers) fail after flush_old_exec() (for example, in setup_arg_pages()) then do_execve() will go through and do mmdrop(bprm.mm). But bprm.mm is now current->mm. We've just freed the current process's mm. The kernel dies in a most ghastly manner. Fix that up by nulling out bprm.mm in flush_old_exec(), at the point where we consumed the mm. Handle the null pointer in the do_execve() error path. Also: don't open-code free_arg_pages() in do_execve(): call it instead.
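A minimal sketch of the fix described above (surrounding code elided; not the literal diff):

    /* flush_old_exec(): once bprm->mm has been installed as current->mm,
     * forget the pointer so the error path cannot free the mm we now run on */
    bprm->mm = NULL;

    /* do_execve() error path: */
    if (bprm.mm)                    /* NULL means flush_old_exec() consumed it */
            mmdrop(bprm.mm);
    free_arg_pages(&bprm);          /* call the helper instead of open-coding it */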
-
Andrew Morton authored
ext2's inode allocator will call find_group_orlov(), which will return a suitable blockgroup in which the inode should be allocated. But by the time we actually try to allocate an inode in the blockgroup, other CPUs could have used them all up. ext2 will bogusly fail with "ext2_new_inode: Free inodes count corrupted in group NN". To fix this we just advance onto the next blockgroup if the rare race happens. If we've scanned all blockgroups then return -ENOSPC. (This is a bit inaccurate: after we've scanned all blockgroups, there may still be available inodes due to inode freeing activity in other blockgroups. This cannot be fixed without fs-wide locking. The effect is a slightly early ENOSPC in a nearly-full filesystem).
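The retry has roughly this shape (try_alloc_in_group() is an assumed stand-in for the bitmap scan ext2 actually performs):

    group = find_group_orlov(sb, dir);
    for (i = 0; i < ngroups; i++) {
            ino = try_alloc_in_group(sb, group);    /* assumed helper */
            if (ino != -1)
                    break;                  /* got one */
            if (++group >= ngroups)         /* raced: this group filled up */
                    group = 0;              /* wrap around and keep looking */
    }
    if (i == ngroups)
            return ERR_PTR(-ENOSPC);        /* every group scanned, none free */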
-
Andrew Morton authored
From: Stephen Smalley <sds@epoch.ncsc.mil> This patch against 2.5.73 replaces vm_enough_memory with a security hook per Alan Cox's suggestion so that security modules can completely replace the logic if desired. Note that the patch changes the interface to follow the convention of the other security hooks, i.e. return 0 if ok or -errno on failure (-ENOMEM in this case) rather than returning a boolean. It also exports various variables and functions required for the vm_enough_memory logic.
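Caller-side effect of the new convention, sketched (assuming the wrapper follows the usual security_* naming; the actual call sites are not shown in this log):

    /* old: boolean, nonzero meant "enough memory"
     *      if (!vm_enough_memory(npages))
     *              return -ENOMEM;
     * new: 0 on success, -errno on failure, like the other security hooks */
    if (security_vm_enough_memory(npages))
            return -ENOMEM;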
-
Andrew Morton authored
From: William Lee Irwin III <wli@holomorphy.com> This patch allows architectures to micro-optimize lowmem_page_address() at their whims. Roman Zippel originally wrote and/or suggested this back when dependencies on page->virtual existing were being shaken out. That's long-settled, so it's fine to do this now.
-
Andrew Morton authored
From: john stultz <johnstul@us.ibm.com> This patch catches a corner case in the lost-tick compensation code. There is a check to see if we overflowed between reads of the two time sources; however, should the high-res time source be slightly slower than what we calibrated, it's possible to trigger this code when no ticks have been lost. This patch adds an extra check to ensure we have seen more than one tick before we check for this overflow. This seems to resolve the remaining "time doubling" issues that I've seen reported.
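The added guard, reduced to its shape (every name here is a placeholder, not code from the patch):

    /* before: */
    if (time_source_overflowed(delta))
            compensate_for_lost_ticks();

    /* after: only treat it as overflow once more than one tick has been
     * seen, so a slightly-slow high-res source cannot trigger it */
    if (ticks_seen > 1 && time_source_overflowed(delta))
            compensate_for_lost_ticks();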
-
Andrew Morton authored
From: john stultz <johnstul@us.ibm.com> The patch tries to resolve issues caused by running the TSC-based lost tick compensation code on CPUs that change frequency (speedstep, etc). Should the CPU be in slow mode when calibrate_tsc() executes, the kernel will assume we have so many cycles per tick. Later, when the cpu speeds up, the kernel will start noting that too many cycles have passed since the last interrupt. Since this can occasionally happen, the lost tick compensation code then tries to fix this by incrementing jiffies. Thus every tick we end up incrementing jiffies many times, causing timers to expire too quickly and time to rush ahead. This patch detects when there have been 100 consecutive interrupts where we had to compensate for lost ticks. If this occurs, we spit out a warning and fall back to using the PIT as a time source. I've tested this on my speedstep-enabled laptop with success, and other laptop users seeing this problem have reported it works for them. Also, to ensure we don't fall back to the slower PIT too quickly, I tested the code on a system I have that loses ~30 ticks about every second, and it can still manage to use the TSC as a good time source. This solves most of the "time doubling" problems seen on laptops. Additionally this revision has been modified to use the cleanups made in rename-timer_A1.
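The detection logic, sketched with assumed names (clock_fallback() stands in for whatever the patch actually calls to switch time sources):

    /* a genuinely lost tick is occasional; 100 compensations in a row
     * means the TSC calibration no longer matches the CPU's frequency */
    if (lost_ticks) {
            if (++consecutive_lost >= 100) {
                    printk(KERN_WARNING "Losing too many ticks, "
                           "falling back to the PIT\n");
                    clock_fallback();
            }
    } else
            consecutive_lost = 0;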
-
Andrew Morton authored
From: john stultz <johnstul@us.ibm.com> This renames the bad "timer" variable to "cur_timer" and moves externs to .h files.
-
Andrew Morton authored
From: Daniel Jacobowitz <dan@debian.org> Right now, CLONE_DETACHED threads silently vanish from GDB's sight when they exit. This patch lets the thread report its exit to the debugger, and then be auto-reaped as soon as it is collected, instead of being reaped as soon as it exits and not reported at all. GDB works either way, but this is more correct and will be useful for some later GDB patches.
-
Andrew Morton authored
From: Adrian Bunk <bunk@fs.tum.de> I got an error at the final linking with CONFIG_TC35815 enabled since the variables tc_readl and tc_writel are not available. The only place where they are defined is arch/mips/pci/ops-jmr3927.c.
-
Andrew Morton authored
Replace it with the blockdev inode's i_sem. And we only really need that for atomic access to file->f_pos.
-
Andrew Morton authored
Rework the file_ops.flush() API so that it is no longer called under lock_kernel(). Push lock_kernel() down to all implementations except CIFS, which doesn't want it.
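What "pushing lock_kernel() down" means in practice, sketched for a hypothetical filesystem that still wants the BKL (the example_* names are invented for illustration):

    /* before: the VFS called f_op->flush() with the BKL already held;
     * after: each implementation that still needs it takes it itself */
    static int example_flush(struct file *file)
    {
            int err;

            lock_kernel();
            err = example_do_flush(file);   /* invented helper */
            unlock_kernel();
            return err;
    }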
-
Andrew Morton authored
From: William Lee Irwin III <wli@holomorphy.com> Remove spurious BKL acquisitions in /proc/. The BKL is not required to access nr_threads for reporting, and get_locks_status() takes it internally, wrapping all operations with it.
-
Andrew Morton authored
lock_kernel() need not be held across truncate.
-
Andrew Morton authored
`attr' is on the stack, and the inode's contents can change as soon as we return from inode_change_ok() anyway. I can't see anything which is actually being locked in there.
-
Andrew Morton authored
Teach ramfs to use generic_file_llseek: default_llseek takes lock_kernel().
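The change amounts to one line in the file_operations table; an approximate sketch (the other fields are abbreviated, not quoted from the tree):

    static struct file_operations ramfs_file_operations = {
            .read   = generic_file_read,
            .write  = generic_file_write,
            .mmap   = generic_file_mmap,
            .fsync  = simple_sync_file,
            .llseek = generic_file_llseek,  /* instead of the BKL-taking default_llseek */
    };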
-
Andrew Morton authored
From: Dave Hansen <haveblue@us.ibm.com> The current numa meminfo code exports (via sysfs) pgdat->node_size, as totalram. This variable is consistently used elsewhere to mean "the number of physical pages that this particular node spans". This is _not_ what we want to see from meminfo, which is: "how much actual memory does this node have?" The following patch removes pgdat->node_size, and replaces it with ->node_spanned_pages. This is to avoid confusion with a new variable, node_present_pages, which is the _actual_ value that we want to export in meminfo. Most of the patch is a simple s/node_size/node_spanned_pages/. The node_size() macro is also removed, and replaced with new ones for node_{spanned,present}_pages() to avoid confusion. We were bitten by this problem in this bug: http://bugme.osdl.org/show_bug.cgi?id=818 Compiled and tested on NUMA-Q.
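The resulting split, sketched (the macro bodies are an assumption based on the description above):

    /* node_spanned_pages: physical pages the node spans, holes included  */
    /* node_present_pages: pages actually present -- what meminfo reports */
    #define node_spanned_pages(nid)  (NODE_DATA(nid)->node_spanned_pages)
    #define node_present_pages(nid)  (NODE_DATA(nid)->node_present_pages)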
-
Andrew Morton authored
From: Manfred Spraul <manfred@colorfullife.com> Manfred's latest page unmapping debug patch. The patch adds support for a special debug mode to both the page and the slab allocator: unused pages are removed from the kernel linear mapping. This means that any access to freed memory will now cause an immediate exception. Right now, read accesses remain totally unnoticed and write accesses may be caught by the slab poisoning, but usually far too late for a meaningful bug report. The implementation is based on a new arch-dependent function, kernel_map_pages(), that removes the pages from the linear mapping. It's right now only implemented for i386.

Changelog:

- Add kernel_map_pages() for i386, based on change_page_attr (see the sketch below). If DEBUG_PAGEALLOC is not set, then the function is an empty stub. The stub is in <linux/mm.h>, i.e. it exists for all archs.

- Make change_page_attr irq-safe. Note that it's not fully irq-safe due to the lack of the tlb flush ipi, but it's good enough for kernel_map_pages(). Another problem is that kernel_map_pages is not permitted to fail, thus PSE is disabled if DEBUG_PAGEALLOC is enabled.

- Use kernel_map_pages for the page allocator.

- Use kernel_map_pages for the slab allocator.

I couldn't resist and added additional debugging support into mm/slab.c:

* at kfree time, the complete backtrace of the kfree caller is stored in the freed object.

* a ptrinfo() function that dumps all known data about a kernel virtual address: the pte value, and if it belongs to a slab cache, the cache name and additional info.

* merging of common code: new helper functions obj_dbglen and obj_dbghdr for the conversion between the user-visible object pointers/lengths and the actual, internal addresses and lengths.
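A sketch of the stub arrangement described in the first changelog item (the exact #ifdef spelling is assumed):

    /* <linux/mm.h>: every arch sees a definition; only i386 provides a
     * real implementation when DEBUG_PAGEALLOC is enabled */
    #ifdef CONFIG_DEBUG_PAGEALLOC
    extern void kernel_map_pages(struct page *page, int numpages, int enable);
    #else
    static inline void kernel_map_pages(struct page *page,
                                        int numpages, int enable)
    {
    }
    #endif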
-
Andrew Morton authored
From: Hugh Dickins <hugh@veritas.com> mremap's move_vma VM_LOCKED case was still wrong. If the do_munmap unmaps a part of new_vma, then its vm_start and vm_end from before cannot both be the right addresses for the make_pages_present range, and may BUG() there. We need [new_addr, new_addr+new_len) to be locked down; but move_page_tables already transferred the locked pages [new_addr, new_addr+old_len), and they're either held in a VM_LOCKED vma throughout, or temporarily in no vma: in neither case can they be swapped out, so there is no need to run over that range again.
-
Dagfinn Ilmari Mannsåker authored
With the recent fixes, io_schedule needs to be exported for modular dm to work.
-
Linus Torvalds authored
-
Joe Thornber authored
Replace a couple of bogus yield() calls with schedule() and io_schedule() respectively.
-
Joe Thornber authored
-
- 01 Jul, 2003 4 commits
-
-
Paul Mundt authored
This includes the remainder of the arch-specific part of the SH merge. This patch only affects arch/sh and include/asm-sh, against current BK.
-
Bartlomiej Zolnierkiewicz authored
Okay, since corruption is now happening and there are some other issues to be resolved ("bad: scheduling while atomic" and "/proc/ide/hdX/identify"), better to set it to 'n' by default for 2.5.74 and also mark it EXPERIMENTAL.
-
Andi Kleen authored
AMD64 does not support a.out, so don't display it in the configuration.
-
Andi Kleen authored
AMD64, like IA64, needs to force IPC_64 in the IPC functions. This makes 2.5 compatible with 2.4 again.
-