- 11 Nov, 2002 27 commits
-
-
Art Haas authored
-
Art Haas authored
-
Art Haas authored
-
Art Haas authored
-
Art Haas authored
-
Art Haas authored
-
Art Haas authored
-
Art Haas authored
-
Art Haas authored
-
Art Haas authored
-
Art Haas authored
-
Art Haas authored
-
Rusty Russell authored
This patch provides basic x86 support for modules.
-
Rusty Russell authored
This is an implementation of the in-kernel module loader extending the try_inc_mod_count() primitive and making its use compulsory. This has the benefit of simplicity and similarity to the existing scheme. To reduce the cost of the constant increments and decrements, reference counters are lockless and per-cpu; a sketch follows below.

Eliminated (coming in following patches):
o Modversions
o Module parameters
o kallsyms
o EXPORT_SYMBOL_GPL and MODULE_LICENSE checks
o DEVICE_TABLE support

New features:
o Typesafe symbol_get/symbol_put
o Single "insert this module" syscall interface allows trivial userspace
o Raceless loading and unloading

You will need the trivial replacement module utilities from:
http://ozlabs.org/~rusty/module-init-tools-0.6.tar.gz
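The lockless per-cpu counting can be pictured with a minimal, hypothetical sketch - the structure, helper names and NR_CPUS value below are illustrative, not the actual loader code:

    #define NR_CPUS 32

    struct module_ref {
            unsigned long count[NR_CPUS];   /* one counter per cpu */
    };

    /* Caller runs with preemption disabled, so `cpu' is stable. */
    static inline void ref_get(struct module_ref *ref, int cpu)
    {
            ref->count[cpu]++;      /* plain increment: no locked bus cycle */
    }

    static inline void ref_put(struct module_ref *ref, int cpu)
    {
            ref->count[cpu]--;      /* may dip below zero locally; only the sum matters */
    }

    /* The true reference count is the sum over all cpus. */
    static unsigned long ref_total(struct module_ref *ref)
    {
            unsigned long sum = 0;
            int i;

            for (i = 0; i < NR_CPUS; i++)
                    sum += ref->count[i];
            return sum;
    }

The point of the design is that the hot get/put path never takes a lock or bounces a shared cacheline; only unload, which is rare, pays for the cross-cpu sum.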
-
Rusty Russell authored
By Kai Germaschewski. This patch adds a -DKBUILD_MODNAME to the kernel compile, which contains the base of the module name being built.
- Some reorganization of the c_flags, since they're needed both for generating modversions (.ver) and for compiling.
- Use the right KBUILD_MODNAME also when the user just wants a .i/.s/.lst file for debugging, and also when generating modversions.
- It looks like with your current approach you can't have a ',' or '-' in KBUILD_MODNAME. However, that means that KBUILD_MODNAME is not quite right for passing module parameters for built-in modules on the command line: it would be confusing to pass parameters for ide-cd as ide_cd.foo=whatever. So that part could use a little more thought.
- If you think your module_names trick makes a noticeable difference, feel free to re-add it.
- It's possible that objects are linked into more than one module. I suppose this shouldn't be a problem, since these objects hopefully have no module_init() nor do they export symbols. Not sure if your patch handled this.
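As a rough illustration of what the define gives a driver - the fallback below exists only so the sketch compiles standalone; in a real build the name comes from the -DKBUILD_MODNAME flag described above:

    #include <stdio.h>

    #ifndef KBUILD_MODNAME
    #define KBUILD_MODNAME "ide_cd"         /* hypothetical fallback */
    #endif

    int main(void)
    {
            /* in-kernel code would use printk() the same way */
            printf(KBUILD_MODNAME ": loaded\n");
            return 0;
    }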
-
Alexander Viro authored
- compile fixes in amiflop.c
- removal of dead local variables in ll_rw_blk.c
- removed gratuitous devfs_get_handle() in usb/input/hiddev.c (no need to do a lookup for "usb" and then create "hid" in there - enough to create "usb/hid" at once)
-
Alexander Viro authored
dasd.c forgot to set ->private_data, but was using it ;-/ Fixed. Remaining dasd_devmap_from_kdev() callers switched to dasd_devmap_from_bdev() (other than the call from dasd_devmap_from_bdev() itself, that is), and dasd_devmap_from_kdev() was merged into dasd_devmap_from_bdev().
-
Alexander Viro authored
Eliminated several gratuitous ->bd_dev uses.
-
Alexander Viro authored
Bunch of kdevname() uses replaced with bdevname(). __bdevname() switched from kdev_t to dev_t; callers updated.
-
Alexander Viro authored
-
Alexander Viro authored
Compile fixes, cleanup.
-
Alexander Viro authored
RAID autoconfig rewritten to use syscalls and moved into do_mounts.c; the use of devfs_get_handle() in do_mounts.c was also rewritten to use syscalls.
-
Alexander Viro authored
paride/pseudo.h fed through Lindent, use of timer replaced with schedule_delayed_work() - that's what the old code tried to emulate.
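A sketch of the timer-to-delayed-work conversion, written against the workqueue API as it looks today (the 2.5-era declarations differed); the handler name and polling interval are made up:

    #include <linux/workqueue.h>

    static void ps_poll(struct work_struct *work);
    static DECLARE_DELAYED_WORK(ps_tq, ps_poll);

    static void ps_poll(struct work_struct *work)
    {
            /* ... poll the device, in process context ... */

            /* re-arm: this is what the old timer was emulating */
            schedule_delayed_work(&ps_tq, HZ / 10);
    }

    static void ps_start(void)
    {
            schedule_delayed_work(&ps_tq, 0);       /* start polling now */
    }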
-
Alexander Viro authored
Instead of user_path_walk() and comparing dentries, sys_swapoff() opens its argument and compares ->i_mapping. Result: slightly simpler code, and swapoff(2) becomes tolerant to e.g.

    swapon /dev/sda2
    switch root from initrd to sda1
    ....
    swapoff /dev/sda2    # where /dev is from sda1, not from initrd

The current tree fails in the case above (different dentries -> no love).
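The comparison itself reduces to something like the following sketch (field names as in 2.5; the helper is illustrative, not the actual sys_swapoff() code):

    /* Two opens of the same device share one inode->i_mapping, even when
     * reached through different /dev trees, so this survives the root
     * switch where dentry comparison does not. */
    static int same_swap_target(struct file *victim, struct file *swap_file)
    {
            return victim->f_dentry->d_inode->i_mapping ==
                   swap_file->f_dentry->d_inode->i_mapping;
    }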
-
Alexander Viro authored
Spotted by Andries - namespace cloning assumes that the new tree is congruent to the old one (when switching root/cwd), but actually inverts the order of children in each node.
-
Hugh Dickins authored
Buffer I/O error on device loop: its use of sendfile is (trivially) broken - the return value is usually the count done, and an error only when negative. This code (like the old one) does not correctly handle partial reads. The nearby spinlocking is clearly bogus; deleted rather than remarked upon.
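The convention being violated, in sketch form (the helper is hypothetical): a negative return is an error, anything else is the byte count, and a short count is not success:

    #include <errno.h>

    static int check_transfer(long retval, unsigned long count)
    {
            if (retval < 0)
                    return retval;          /* real error: propagate it */
            if ((unsigned long)retval < count)
                    return -EIO;            /* partial transfer */
            return 0;                       /* retval == count: success */
    }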
-
Dave Kleikamp authored
This file was somehow skipped (along with jfs_acl.h) when I checked in the ACL support.
-
- 10 Nov, 2002 13 commits
-
-
Linus Torvalds authored
-
Andrew Morton authored
This patch will break some userspace monitoring apps in the name of having sane disk statistics in 2.6.x. Patch from Rick Lindsley <ricklind@us.ibm.com>

In 2.5.46, disk statistics are collected twice: once for gendisk/hd_struct, and once for dkstat. They collect the same thing. This patch removes dkstat, which also had the disadvantage of being limited by DK_MAX_MAJOR and DK_MAX_DISK. (Those #defines are removed too.)

In addition, this patch removes disk statistics from /proc/stat, since they are now available via sysfs and there seems to have been a general preference in previous discussions to "clean up" /proc/stat. Too many disks being reported in /proc/stat also caused buffer overflows when trying to print out the data.

The code in led.c from the parisc architecture has apparently not been recompiled under recent versions of 2.5, since it references kstat.dk_drive, which doesn't exist in later versions. Accordingly, I've added an #if 0 and a comment to that code so that it at least compiles, albeit without one feature - a step up from its state now. If it is preferable to keep the broken code in, that part may easily be excised from the patch below.
-
Andrew Morton authored
We're currently incrementing /proc/vmstat:pgalloc in front of the per-cpu page queues, and incrementing /proc/vmstat:pgfree behind the per-cpu queues, so they get out of whack. Change it so that we increment the counters each time someone requests a page, i.e. both are in front of the queues. Also, remove a duplicated prep_new_page() call and, as a consequence, drop the whole additional list walk in rmqueue_bulk().
-
Andrew Morton authored
There was some strange code in the __getblk()/__find_get_block()/__bread() area which was performing multiple bh_lru_install() calls as well as multiple touch_buffer() calls. Fix all that up: we only need to run bh_lru_install() and touch_buffer() in __find_get_block(), because if the block wasn't found, __getblk() will create it and re-run __find_get_block(). A sketch of the resulting flow follows below.

Also document a few things and make a couple of internal symbols static to buffer.c.

Also, don't run __find_get_block() from within unmap_underlying_metadata(). We hardly expect to find that block inside the LRU, and we hardly expect to use it as metadata in the near future, so there's no point in letting it evict another buffer if we found it. So just go straight into the pagecache lookup for unmap_underlying_metadata().
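Roughly, the control flow described above (simplified signatures; grow_buffers() stands in for the real creation path):

    struct buffer_head *__getblk(struct block_device *bdev,
                                 sector_t block, int size)
    {
            /* only __find_get_block() runs touch_buffer() and
             * bh_lru_install(), so both the hit case and the
             * create-then-retry case do each exactly once */
            struct buffer_head *bh = __find_get_block(bdev, block, size);

            while (bh == NULL) {
                    grow_buffers(bdev, block, size);
                    bh = __find_get_block(bdev, block, size);
            }
            return bh;
    }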
-
Andrew Morton authored
Patch from Lev Makhlis <mlev@despammed.com> The disk accounting will overflow after 4,000,000 seconds. Extend that by a factor of 1000.
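A quick sanity check of that figure, assuming a 32-bit counter ticking at HZ=1000 (i.e. milliseconds):

    #include <stdio.h>

    int main(void)
    {
            unsigned long long max = 0xFFFFFFFFULL;   /* 32-bit counter */
            unsigned long long hz = 1000;             /* assumed tick rate */

            /* ~4294967 seconds, i.e. the ~4,000,000-second figure above */
            printf("overflow after %llu seconds (~%llu days)\n",
                   max / hz, max / hz / 86400);
            return 0;
    }

Widening by a factor of 1000 pushes the overflow horizon out to roughly 136 years.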
-
Andrew Morton authored
Patch from William Lee Irwin III <wli@holomorphy.com> This patch makes various private structures and procedures static.
-
Andrew Morton authored
Patch from William Lee Irwin III <wli@holomorphy.com> This patch removes hugetlb's intrusion into /proc/
-
Andrew Morton authored
Patch from William Lee Irwin III <wli@holomorphy.com> This patch removes hugetlb's intrusion into kernel/sysctl.c
-
Andrew Morton authored
Patch from William Lee Irwin III <wli@holomorphy.com> This patch internalizes hugetlb initialization, implementing a command-line option in the process.
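The command-line hook is presumably of the usual __setup() form; a sketch with an illustrative parameter name and variable:

    #include <linux/init.h>
    #include <linux/kernel.h>

    static unsigned long max_huge_pages;    /* illustrative knob */

    static int __init hugetlb_setup(char *s)
    {
            if (sscanf(s, "%lu", &max_huge_pages) != 1)
                    max_huge_pages = 0;
            return 1;
    }
    __setup("hugepages=", hugetlb_setup);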
-
Andrew Morton authored
Patch from William Lee Irwin III <wli@holomorphy.com> This patch removes the unused function unlink_vma().
-
Andrew Morton authored
Patch from William Lee Irwin III <wli@holomorphy.com> This patch eliminates zap_hugetlb_resources, along with its usages. This actually fixes bugs, as zap_hugetlb_resources was itself buggy.
-
Andrew Morton authored
Patch from William Lee Irwin III <wli@holomorphy.com>

Idle time accounting is disturbed by the iowait statistics, for several reasons:

(1) iowait time is not subdivided among cpus. The only way the distinction between idle time subtracted from cpus (in order to be accounted as iowait) can be made is by summing counters for a total and dividing the individual tick counters by the proportions. Any tick type resolution which is not properly per-cpu breaks this, meaning that cpus which are entirely idle, when any iowait is present on the system, will have all idle ticks accounted to iowait instead of true idle time.

(2) kstat_read_proc() misreports iowait time. The idle tick counter is passed twice to the sprintf(), once in the idle tick position, and once in the iowait tick position.

(3) Performance enhancement. The O(1) scheduler was very carefully constructed to perform accesses only to localized cachelines whenever possible. The global counter violates one of its core design principles, and the localization of "most" accesses is in greater harmony with its overall design and provides (at the very least) a qualitative performance improvement wrt. cache.

The method of correcting this is simple: embed an atomic iowait counter in the runqueues, find the runqueue being manipulated in io_schedule(), increment its atomic counter prior to schedule(), and decrement it after returning from schedule(), which is guaranteed to be the same one, as the counter incremented is tracked as a variable local to the procedure. Then simply sum to obtain a global iowait statistic. (Atomicity is required as the post-wait decrement may occur on a different cpu from the one owning the counter.)

io_schedule() and io_schedule_timeout() are moved to sched.c as they must access the runqueues, which are private to sched.c, and nr_iowait() is created in order to export the sum of all runqueues' nr_iowait().
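In sketch form, with the runqueue helpers named as the description implies (details are illustrative):

    void io_schedule(void)
    {
            struct runqueue *rq = this_rq();

            /* atomic, because the decrement below may run on a
             * different cpu than the one owning this runqueue */
            atomic_inc(&rq->nr_iowait);
            schedule();
            /* rq is a local variable, so this is the same counter */
            atomic_dec(&rq->nr_iowait);
    }

    /* the global statistic is just the sum of the per-runqueue counters */
    unsigned long nr_iowait(void)
    {
            unsigned long sum = 0;
            int i;

            for (i = 0; i < NR_CPUS; i++)
                    sum += atomic_read(&cpu_rq(i)->nr_iowait);
            return sum;
    }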
-
Andrew Morton authored
A patch from Janet Morgan <janetmor@us.ibm.com>

If you feed an iovec with a bad address not at the zeroth segment into readv or writev, it returns the wrong value. Given:

    iovec 1: base is 8050b20  len is 64
    iovec 2: base is ffffffff len is 64
    iovec 3: base is 8050ba0  len is 64

the writev should return 64 bytes but is returning 128. This is because we've added the new segment's length into `count' before running access_ok(). The patch changes it to fix that up on the slow path, if access_ok() fails.
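The case reproduces from userspace with something like this (addresses will differ; what matters is the return value):

    #include <stdio.h>
    #include <string.h>
    #include <sys/uio.h>
    #include <unistd.h>

    int main(void)
    {
            char buf[64];
            struct iovec iov[3] = {
                    { .iov_base = buf,         .iov_len = 64 },  /* valid */
                    { .iov_base = (void *)-1L, .iov_len = 64 },  /* bad address */
                    { .iov_base = buf,         .iov_len = 64 },  /* never reached */
            };

            memset(buf, 'x', sizeof(buf));

            /* With this patch the kernel writes the valid 64-byte prefix
             * and returns 64 (the unpatched kernel returned 128); other
             * kernel versions may instead fail outright with EFAULT. */
            ssize_t n = writev(STDOUT_FILENO, iov, 3);
            fprintf(stderr, "writev returned %zd\n", n);
            return 0;
    }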
-