- 20 Apr, 2005 9 commits
-
-
Herbert Xu authored
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Arnaldo Carvalho de Melo authored
Signed-off-by: Arnaldo Carvalho de Melo <acme@ghostprotocols.net> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Stephen Hemminger authored
Here is a revised alternative that uses BUG_ON/WARN_ON (as suggested by Herbert Xu) to eliminate NET_CALLER. Signed-off-by: Stephen Hemminger <shemminger@osdl.org> Signed-off-by: David S. Miller <davem@davemloft.net>
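A minimal sketch of the pattern, not the patch itself (the skb fields shown are assumptions about the call site): BUG_ON() oopses when its condition holds, WARN_ON() prints a stack trace and continues, and since the resulting trace already identifies the caller there is no need to record it by hand the way NET_CALLER did.

    #include <linux/kernel.h>
    #include <linux/skbuff.h>

    static inline void check_skb_room(struct sk_buff *skb, unsigned int len)
    {
            /* fatal inconsistency: oops here; the trace names the caller */
            BUG_ON(skb->tail + len > skb->end);

            /* suspicious but survivable: log a stack trace, carry on */
            WARN_ON(skb->next != NULL);
    }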
-
David S. Miller authored
Noticed by Herbert Xu. Signed-off-by: David S. Miller <davem@davemloft.net>
-
Thomas Graf authored
Be kind to userspace and don't force it to hardcode protocol families, only to have them change again once we support routing rules for more than one protocol family. Signed-off-by: Thomas Graf <tgraf@suug.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Herbert Xu authored
While looking at this problem I noticed that IPv6 was sometimes looking at inet->recverr, which is bogus. Here is a patch to correct that and use np->recverr. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Acked-by: Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org> Signed-off-by: David S. Miller <davem@davemloft.net>
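A hedged sketch of the fix's shape (the helper name below is illustrative, not from the patch): IPv6 code paths should consult the IPv6 socket state via inet6_sk(), not the IPv4 inet state.

    #include <net/sock.h>
    #include <linux/ipv6.h>

    static int want_icmp_error_queued(struct sock *sk)
    {
            struct ipv6_pinfo *np = inet6_sk(sk);

            /* was: inet_sk(sk)->recverr -- the IPv4 flag, bogus for IPv6 */
            return np->recverr;
    }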
-
Herbert Xu authored
Here is a patch that introduces skb_store_bits -- the opposite of skb_copy_bits -- and uses the pair to read/write the csum field in rawv6. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: David S. Miller <davem@davemloft.net>
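A hedged usage sketch (the function name is illustrative and error checks are elided): the two helpers move bytes between a flat buffer and an skb whose data may be nonlinear.

    #include <linux/skbuff.h>

    /* read-modify-write a 16-bit checksum stored at `offset` in the skb */
    static void put_csum_at(struct sk_buff *skb, int offset, __u16 csum)
    {
            __u16 old;

            skb_copy_bits(skb, offset, &old, sizeof(old));    /* skb -> buf */
            skb_store_bits(skb, offset, &csum, sizeof(csum)); /* buf -> skb */
    }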
-
YOSHIFUJI Hideaki authored
From: Tushar Gohad <tgohad@mvista.com> Signed-off-by: Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Herbert Xu authored
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 19 Apr, 2005 31 commits
-
-
Hugh Dickins authored
Once all the MMU architectures define FIRST_USER_ADDRESS, remove the hack from mmap.c which derived it from FIRST_USER_PGD_NR. Signed-off-by: Hugh Dickins <hugh@veritas.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Hugh Dickins authored
Replace the misleading definition of FIRST_USER_PGD_NR as 0 with a definition of FIRST_USER_ADDRESS as 0 in all the MMU architectures beyond arm and arm26. Signed-off-by: Hugh Dickins <hugh@veritas.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Hugh Dickins authored
Make ARM26 define FIRST_USER_ADDRESS as PAGE_SIZE (beyond the machine vectors when they are mapped low), and use that definition in place of the locally defined MIN_MAP_ADDR. Previously, ARM26 permitted user mappings at 0 if the machine vectors were mapped high; but that's inconsistent with ARM, and FIRST_USER_ADDRESS would then have to be determined at runtime. Let's fix it at PAGE_SIZE throughout the architecture. Signed-off-by: Hugh Dickins <hugh@veritas.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Hugh Dickins authored
Make ARM define FIRST_USER_ADDRESS as PAGE_SIZE (beyond the machine vectors when they are mapped low), and use that definition in place of the locally defined MIN_MAP_ADDR. Signed-off-by: Hugh Dickins <hugh@veritas.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
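In rough outline (the config guard below is hypothetical, purely for illustration), the per-architecture definitions introduced by this group of patches amount to:

    #ifdef CONFIG_EXAMPLE_ARM       /* hypothetical guard: arm and arm26 */
    /* page 0 may hold the machine vectors when they are mapped low */
    #define FIRST_USER_ADDRESS      PAGE_SIZE
    #else
    /* every other MMU architecture: user space starts at address 0 */
    #define FIRST_USER_ADDRESS      0UL
    #endif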
-
Hugh Dickins authored
Remove use of FIRST_USER_PGD_NR from sys_mincore: it's inconsistent (no other syscall refers to it), unnecessary (sys_mincore loops over vmas further down) and incorrect (misses user addresses in ARM's first pgd). Signed-off-by: Hugh Dickins <hugh@veritas.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Hugh Dickins authored
The patches to free_pgtables by vma left problems on any architectures which leave some user address page table entries unencapsulated by vma. Andi has fixed the 32-bit vDSO on x86_64 to use a vma. Now fix arm (and arm26), whose first PAGE_SIZE is reserved (perhaps) for machine vectors.

Our calls to free_pgtables must not touch that area, and exit_mmap's BUG_ON(nr_ptes) must allow that arm's get_pgd_slow may (or may not) have allocated an extra page table, which its free_pgd_slow would free later.

FIRST_USER_PGD_NR has misled me and others: until all the arches define FIRST_USER_ADDRESS instead, a hack in mmap.c derives one from t'other. This patch fixes the bugs; the remaining patches just clean it up.

Signed-off-by: Hugh Dickins <hugh@veritas.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
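The interim hack referred to above plausibly takes this shape (a sketch assuming the straightforward derivation, not copied from the patch):

    /* mm/mmap.c: temporary, until every arch defines FIRST_USER_ADDRESS */
    #ifndef FIRST_USER_ADDRESS
    #define FIRST_USER_ADDRESS      (FIRST_USER_PGD_NR * PGDIR_SIZE)
    #endif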
-
Hugh Dickins authored
Once we're strict about clearing away page tables, hugetlb_prefault can assume there are no page tables left within its range. Since the other arches continue if !pte_none here, let i386 do the same. Signed-off-by: Hugh Dickins <hugh@veritas.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Hugh Dickins authored
While dabbling here in mmap.c, clean up mysterious "mpnt"s to "vma"s. Signed-off-by: Hugh Dickins <hugh@veritas.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Hugh Dickins authored
ia64 and sparc64 hurriedly had to introduce their own variants of pgd_addr_end, to leapfrog over the holes in their virtual address spaces which the final clear_page_range suddenly presented when converted from pgd_index to pgd_addr_end. But now that free_pgtables respects the vma list, those holes are never presented, and the arch variants can go. Signed-off-by: Hugh Dickins <hugh@veritas.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
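For reference, the generic pgd_addr_end() that the arch variants collapse back to has roughly this shape: it yields the next pgd boundary above addr, clamped so it never runs past end.

    #define pgd_addr_end(addr, end)                                         \
    ({      unsigned long __boundary = ((addr) + PGDIR_SIZE) & PGDIR_MASK;  \
            (__boundary - 1 < (end) - 1) ? __boundary : (end);              \
    })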
-
Hugh Dickins authored
ia64 and ppc64 had hugetlb_free_pgtables functions which were no longer being called, and it wasn't obvious what to do about them. The ppc64 case turns out to be easy: the associated tables are noted elsewhere and freed later, safe to either skip its hugetlb areas or go through the motions of freeing nothing. Since ia64 does need a special case, restore to ppc64 the special case of skipping them.

The ia64 hugetlb case has been broken since pgd_addr_end went in, though it probably appeared to work okay if you just had one such area; in fact it's been broken much longer if you consider a long munmap spanning from another region into the hugetlb region.

In the ia64 hugetlb region, more virtual address bits are available than in the other regions, yet the page tables are structured the same way: the page at the bottom is larger. Here we need to scale down each addr before passing it to the standard free_pgd_range. Was about to write a hugely_scaled_down macro, but found htlbpage_to_page already exists for just this purpose. Fixed off-by-one in ia64 is_hugepage_only_range.

Uninline free_pgd_range to make it available to ia64. Make sure the vma-gathering loop in free_pgtables cannot join a hugepage_only_range to any other (safe to join huges? probably but don't bother).

Signed-off-by: Hugh Dickins <hugh@veritas.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
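A hedged sketch of the ia64 scaling step described above; the wrapper name and exact argument treatment are assumptions, only htlbpage_to_page and free_pgd_range come from the description:

    /* ia64: shrink hugetlb-region addresses with the existing
     * htlbpage_to_page() before walking the ordinary page tables */
    void hugetlb_free_pgd_range(struct mmu_gather **tlb,
                                unsigned long addr, unsigned long end,
                                unsigned long floor, unsigned long ceiling)
    {
            free_pgd_range(tlb, htlbpage_to_page(addr),
                           htlbpage_to_page(end),
                           htlbpage_to_page(floor),
                           htlbpage_to_page(ceiling));
    }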
-
Hugh Dickins authored
There's only one usage of MM_VM_SIZE(mm) left, and it's a troublesome macro because mm doesn't contain the (32-bit emulation?) info needed. But it too is only needed because we ignore the end from the vma list.

We could make flush_pgtables return that end, or unmap_vmas. Choose the latter, since it's a natural fit with unmap_mapping_range_vma needing to know its restart addr. This does make more than a minimal change, but if unmap_vmas had returned the end before, this is how we'd have done it, rather than storing the break_addr in zap_details.

unmap_vmas used to return the count of vmas scanned, but that's just debug which hasn't been useful in a while; and if we want the map_count 0 on exit check back, it can easily come from the final remove_vm_struct loop.

Signed-off-by: Hugh Dickins <hugh@veritas.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
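A hedged sketch of the resulting exit_mmap() shape (interface recalled from this era of the kernel, so treat the exact parameters as assumptions): the address unmap_vmas() returns replaces MM_VM_SIZE(mm).

    struct mmu_gather *tlb;
    unsigned long end, nr_accounted = 0;

    tlb = tlb_gather_mmu(mm, 1);
    /* tear everything down; the return value says how far we got */
    end = unmap_vmas(&tlb, mm, mm->mmap, 0, -1, &nr_accounted, NULL);
    free_pgtables(&tlb, mm->mmap, FIRST_USER_ADDRESS, 0);
    tlb_finish_mmu(tlb, 0, end);    /* flush up to the real end */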
-
Hugh Dickins authored
Recent woes with some arches needing their own pgd_addr_end macro; and 4-level clear_page_range regression since 2.6.10's clear_page_tables; and its long-standing well-known inefficiency in searching throughout the higher-level page tables for those few entries to clear and free: all can be blamed on ignoring the list of vmas when we free page tables.

Replace exit_mmap's clear_page_range of the total user address space by free_pgtables operating on the mm's vma list; unmap_region uses it in the same way, giving floor and ceiling beyond which it may not free tables. This brings lmbench fork/exec/sh numbers back to 2.6.10 (unless preempt is enabled, in which case latency fixes spoil unmap_vmas throughput).

Beware: the do_mmap_pgoff driver failure case must now use unmap_region instead of zap_page_range, since a page table might have been allocated, and can only be freed while it is touched by some vma.

Move free_pgtables from mmap.c to memory.c, where its lower levels are adapted from the clear_page_range levels. (Most of free_pgtables' old code was actually for a non-existent case, prev not properly set up, dating from before hch gave us split_vma.) Pass mmu_gather** in the public interfaces, since we might want to add latency lockdrops later; but no attempt to do so yet, going by vma should itself reduce latency.

But what if is_hugepage_only_range? Those ia64 and ppc64 cases need careful examination: put that off until a later patch of the series. What of the x86_64 32bit vdso page which __map_syscall32 maps outside any vma? And the range to sparc64's flush_tlb_pgtables? It's less clear to me now that we need to do more than is done here - every PMD_SIZE ever occupied will be flushed, do we really have to flush every PGDIR_SIZE ever partially occupied? A shame to complicate it unnecessarily.

Special thanks to David Miller for time spent repairing my ceilings.

Signed-off-by: Hugh Dickins <hugh@veritas.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
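Sketched from the description above (treat the exact parameter order as an assumption), the new interface and its two callers look roughly like:

    /* free page tables only inside the vma list, and never below floor
     * or at/above ceiling (ceiling 0 == top of the address space) */
    void free_pgtables(struct mmu_gather **tlb, struct vm_area_struct *vma,
                       unsigned long floor, unsigned long ceiling);

    /* exit_mmap(): the whole user address space may be freed */
    free_pgtables(&tlb, vma, FIRST_USER_ADDRESS, 0);

    /* unmap_region(): stop short of the neighbours' page tables */
    free_pgtables(&tlb, vma, prev ? prev->vm_end : FIRST_USER_ADDRESS,
                  next ? next->vm_start : 0);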
-
Linus Torvalds authored
Merge of 13 driver core, sysfs, and debugfs fixes.
-
Linus Torvalds authored
Merge of 11 aoe bugfix patches.
-
Linus Torvalds authored
-
Linus Torvalds authored
Yah, it does work to merge. Knock wood.
-
ecashin@coraid.com authored
Send outgoing packets in order. I can't use list.h, since sk_buff doesn't have a list_head but instead has two struct sk_buff pointers, and I want to avoid any extra memory allocation. Signed-off-by: Ed L. Cashin <ecashin@coraid.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
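A minimal sketch of the approach (the head/tail names are hypothetical, not necessarily the driver's): thread a FIFO through sk_buff's own next pointer, so keeping packets in order costs no extra allocation.

    #include <linux/skbuff.h>

    static struct sk_buff *sendq_hd, *sendq_tl;   /* illustrative queue */

    static void sendq_push(struct sk_buff *skb)   /* append at the tail */
    {
            skb->next = NULL;
            if (sendq_tl)
                    sendq_tl->next = skb;
            else
                    sendq_hd = skb;
            sendq_tl = skb;
    }

    static struct sk_buff *sendq_pop(void)        /* take from the head */
    {
            struct sk_buff *skb = sendq_hd;

            if (skb) {
                    sendq_hd = skb->next;
                    if (!sendq_hd)
                            sendq_tl = NULL;
                    skb->next = NULL;
            }
            return skb;
    }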
-
ecashin@coraid.com authored
add support for disk statistics Signed-off-by: Ed L. Cashin <ecashin@coraid.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
ecashin@coraid.com authored
add note about the need for deadlock-free sk_buff allocation Signed-off-by: Ed L. Cashin <ecashin@coraid.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
ecashin@coraid.com authored
document env var for specifying number of partitions per dev Signed-off-by: Ed L. Cashin <ecashin@coraid.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
ecashin@coraid.com authored
sparse cleanup from Alexey Dobriyan Signed-off-by: Alexey Dobriyan <adobriyan@mail.ru> Signed-off-by: Ed L. Cashin <ecashin@coraid.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
ecashin@coraid.com authored
don't try to free null bufpool Signed-off-by: Ed L. Cashin <ecashin@coraid.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
ecashin@coraid.com authored
handle distros that have a udev rules file instead of dir Signed-off-by: Ed L. Cashin <ecashin@coraid.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
ecashin@coraid.com authored
update driver version to 6 Signed-off-by: Ed L. Cashin <ecashin@coraid.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
ecashin@coraid.com authored
allow multiple aoe devices with same MAC addr Signed-off-by: Ed L. Cashin <ecashin@coraid.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
ecashin@coraid.com authored
remove too-low cap on minor number Signed-off-by: Ed L. Cashin <ecashin@coraid.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
kay.sievers@vrfy.org authored
kobject_add() and kobject_del() don't emit hotplug events anymore. We need to do it ourselves now. Signed-off-by: Kay Sievers <kay.sievers@vrfy.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
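A hedged sketch of the fix repeated in this and the following entries, using the kobject API of this era as recalled (struct my_dev is hypothetical): raise the hotplug event explicitly around add and delete.

    #include <linux/kobject.h>

    struct my_dev { struct kobject kobj; };       /* hypothetical */

    static int example_register(struct my_dev *dev)
    {
            int error = kobject_add(&dev->kobj);

            /* announce only once the object is actually in sysfs */
            if (!error)
                    kobject_hotplug(&dev->kobj, KOBJ_ADD);
            return error;
    }

    static void example_unregister(struct my_dev *dev)
    {
            /* announce removal ourselves, then delete the object */
            kobject_hotplug(&dev->kobj, KOBJ_REMOVE);
            kobject_del(&dev->kobj);
    }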
-
kay.sievers@vrfy.org authored
kobject_add() and kobject_del() don't emit hotplug events anymore. We need to do it ourselves now. Signed-off-by: Kay Sievers <kay.sievers@vrfy.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
kay.sievers@vrfy.org authored
kobject_add() and kobject_del() don't emit hotplug events anymore. Do it ourselves once we have finished populating the device directory. Signed-off-by: Kay Sievers <kay.sievers@vrfy.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
kay.sievers@vrfy.org authored
kobject_add() and kobject_del() don't emit hotplug events anymore. Do it ourselves once we have finished populating the device directory. Signed-off-by: Kay Sievers <kay.sievers@vrfy.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
kay.sievers@vrfy.org authored
kobject_add() and kobject_del() don't emit hotplug events anymore. Do it ourselves once we have finished populating the device directory. Signed-off-by: Kay Sievers <kay.sievers@vrfy.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-