- 13 May, 2003 2 commits
-
-
Linus Torvalds authored
bk://kernel.bkbits.net/davem/net-2.5
into home.transmeta.com:/home/torvalds/v2.5/linux
-
Linus Torvalds authored
bk://linux-dj.bkbits.net/agpgart
into home.transmeta.com:/home/torvalds/v2.5/linux
-
- 14 May, 2003 1 commit
-
-
Dave Jones authored
into tetrachloride.(none):/mnt/raid/src/kernel/2.5/agpgart
-
- 13 May, 2003 16 commits
-
-
David S. Miller authored
into kernel.bkbits.net:/home/davem/net-2.5
-
David S. Miller authored
-
Andrew Morton authored
-
Andrew Morton authored
-
Chas Williams authored
-
Daniel McNeil authored
-
Linus Torvalds authored
Quite a few suspicious places here that pass kernel pointers to the internal ioctl engine.
-
Linus Torvalds authored
declarations.
-
David S. Miller authored
-
David S. Miller authored
-
David S. Miller authored
-
Linus Torvalds authored
-
David S. Miller authored
Much help provided by Rusty Russell in fixing device leak and TOS modification handling bugs.
-
David S. Miller authored
bk://kernel.bkbits.net/acme/net-2.5
into nuts.ninka.net:/home/davem/src/BK/net-2.5
-
Arnaldo Carvalho de Melo authored
-
Linus Torvalds authored
pointer. The integrated graphics AGP things don't have one.
-
- 12 May, 2003 21 commits
-
-
Dave Jones authored
into tetrachloride.(none):/mnt/raid/src/kernel/2.5/agpgart
-
Dave Jones authored
Closes bugzilla #646
-
Linus Torvalds authored
http://linux-scsi.bkbits.net/scsi-for-linus-2.5
into home.transmeta.com:/home/torvalds/v2.5/linux
-
James Bottomley authored
into raven.il.steeleye.com:/home/jejb/BK/scsi-for-linus-2.5
-
Andrew Morton authored
From Alex Tomas: We started using ext3_dx_readdir() for all dir_index filesystems because we always want to return entries in hash order, so that a readdir with a partial read plus a new entry added before the next readdir doesn't go crazy. So we now need to free the structure at filp->private_data even for non-indexed directories.
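A minimal sketch of the idea, assuming the ext3 function names I recall (ext3_release_dir() and ext3_htree_free_dir_info() are not named in the message and may not match the actual patch): free the readdir cursor on every directory close, whether or not the directory is indexed.

    static int ext3_release_dir(struct inode *inode, struct file *filp)
    {
            /* The readdir cursor can now exist for non-indexed
             * directories too, so free it unconditionally. */
            if (filp->private_data)
                    ext3_htree_free_dir_info(filp->private_data);
            return 0;
    }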
-
Andrew Morton authored
Patch from "Theodore Ts'o" <tytso@mit.edu> We now use 0x7ffffff as the EOF cookie, because Linux NFS stupidly interprets the cookie (which is supposed to be a bag of bits without necessarily any semantic value) as a signed 64 bit integer, and then converts it to a unsigned integer, and then blows up if it cannot be expressed be expressed as a 32-bit value!! In order to do this, we have to fold the hash value 0x7ffffff into the hash value 0x7ffffffe. This is relatively safe; the only time we will lose if the directory contains filenames that hash to both 0x7ffffffe and 0x7fffffff (under the original hash), and the last directory entry which hashes to 0x7ffffffe is at the end of a leaf block, and the first directory entry which hashes to 0x7fffffff is at the beginning of a leaf block.
-
Andrew Morton authored
Patch from "Theodore Ts'o" <tytso@mit.edu> The following patch should (in theory) fix the htree/NFS readdir problems that people have reported. Specifically, it should fix the NFS looping on EOF problem with readdir, as well as the problems caused by coverting a directory to HTREE while an NFS readdir is in progress problem. I'd appreciate it if people who can easily replicate these NFS/htree problems could give this patch (against BK-recent / 2.5.63) a whirl. Thanks!!
-
Andrew Morton authored
From: John M Flinchbaugh <glynis@butterfly.hjsoft.com> This patch makes arch/i386/oprofile/init.c build.
-
Andrew Morton authored
From: Andreas Dilger <adilger@clusterfs.com> Below are the patches which reserve the Lustre EA index. The rest of the code is part of the Lustre tree, which isn't working with 2.5 yet.
-
Andrew Morton authored
From: William Lee Irwin III <wli@holomorphy.com>

The new vmalloc() semantics from 2.5.32 had a race window. As things stand, the presence of a vm_area in the vmlist protects allocators other than the owner from examining the ptes in that area. This puts an ordering constraint on unmapping: allocators are required to unmap areas before removing them from the list or otherwise dropping the lock. Currently, unmap_vm_area() is done outside the lock and after the area is removed, which, as we've seen from Felix von Leitner's test, is oopsable.

The following patch folds calls to unmap_vm_area() into remove_vm_area() to reinstate what are essentially the 2.4.x semantics of vfree(). This renders a number of unmap_vm_area() calls unnecessary (and in fact oopsable, since they wipe ptes from later allocations).

It's an open question whether this is sufficiently performant, but it is the minimally invasive approach. The more performant alternative is to provide the right API hooks to wipe the vmalloc() area clean before removing it from the list, using the ownership of the area to eliminate holding the vmlist_lock for the duration of the unmapping. If it proves to be necessary, wli is on standby to implement it.
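A hedged sketch of the ordering the patch establishes (not the literal diff; list and lock details are simplified from the description above): the ptes are wiped inside remove_vm_area(), before the area is unlinked and before the lock is dropped.

    struct vm_struct *remove_vm_area(void *addr)
    {
            struct vm_struct **p, *tmp;

            write_lock(&vmlist_lock);
            for (p = &vmlist; (tmp = *p) != NULL; p = &tmp->next) {
                    if (tmp->addr == addr) {
                            unmap_vm_area(tmp);     /* wipe the ptes while the
                                                     * area is still on the list */
                            *p = tmp->next;         /* only now unlink it */
                            write_unlock(&vmlist_lock);
                            return tmp;
                    }
            }
            write_unlock(&vmlist_lock);
            return NULL;
    }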
-
Andrew Morton authored
From: Manfred Spraul <manfred@colorfullife.com> de_thread calls list_del(&current->tasks), but current->tasks was never added to the task list. The structure contains stale values from the parent. switch_exec_pid() transforms a normal thread to a thread group leader. Thread group leaders are included in the init_task.tasks linked list, non-leaders are not in that list. The patch adds the new thread group leader to the linked list, otherwise de_thread corrupts the task list.
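A hedged sketch of the fix described above, not the literal patch (the list field and the init_task anchor follow 2.5-era conventions and are an assumption here):

    /* In switch_exec_pid(), once 'thread' has taken over as group
     * leader, link it into the global task list so the later
     * list_del(&current->tasks) in de_thread() unlinks a node that
     * is actually on the list. */
    list_add_tail(&thread->tasks, &init_task.tasks);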
-
Andrew Morton authored
Rather than assuming that all the things which copy_process() calls want to return -ENOMEM, correctly propagate the return values. This turns out to be a no-op at present.
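An illustrative fragment of that pattern (these particular call sites and labels are assumptions in the style of kernel/fork.c, not the actual diff): each helper's return value is passed up instead of being collapsed to -ENOMEM.

    retval = copy_files(clone_flags, p);
    if (retval)
            goto bad_fork_cleanup_semundo;
    retval = copy_fs(clone_flags, p);
    if (retval)
            goto bad_fork_cleanup_files;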
-
Andrew Morton authored
People like to see when the emergency sync and emergency remount operations have completed.
-
Andrew Morton authored
From: Keith Mannthey <kmannth@us.ibm.com> The following is a patch to fix inconsistent use of the function set_ioapic_affinity. In the current kernel it is unclear whether the value being passed to the function is a cpu mask or a valid apic id. In irq_affinity_write_proc the kernel passes on a cpu mask, but the kirqd thread passes on logical apic ids. In flat apic mode this is not an issue because a cpu mask represents the apic value. However, in clustered apic mode the cpu mask is very different from the logical apic id. This is an attempt to do the right thing for clustered apics. I clarify that the value being passed to set_ioapic_affinity is a cpu mask, not an apicid; set_ioapic_affinity will do the conversion to logical apic ids. Since many cpu masks don't map to valid apicids in clustered apic mode, TARGET_CPUS is used as a default value when such a situation occurs. I think this is a good step in making irq_affinity clustered-apic safe.
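A hedged sketch of that convention (the helpers below are hypothetical stand-ins, not the real io_apic.c code): both callers always hand over a cpu mask, and the conversion happens in one place.

    static void set_ioapic_affinity(unsigned int irq, unsigned long cpu_mask)
    {
            unsigned long apicid;

            /* Callers (irq_affinity_write_proc and kirqd) pass a cpu mask. */
            if (cpu_mask_has_valid_apicid(cpu_mask))                /* hypothetical check */
                    apicid = cpu_mask_to_logical_apicid(cpu_mask);  /* hypothetical helper */
            else
                    apicid = TARGET_CPUS;   /* safe default for unmappable masks */

            io_apic_set_target(irq, apicid);        /* hypothetical programming helper */
    }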
-
Andrew Morton authored
From: Andrey Panin <pazke@donpac.ru> The attached patch fixes the penguin/SGI framebuffer logo for the visws subarch. It was broken in 2.5.68 IIRC.
-
Andrew Morton authored
From: Mingming Cao <cmm@us.ibm.com> Basically, freeary() is called with the spinlock for that semaphore set held. But after the semaphore set is removed from the ID array by calling sem_rmid(), there is no lock to protect the waiting queue for that semaphore set. So, if a waiter is woken up by a signal (not by the wakeup from freeary()), it will check the q->status and q->prev fields. At that moment, freeary() may not have had a chance to update those fields yet.

    static void freeary (int id)
    {
            .......
            sma = sem_rmid(id);
            ......
            /* Wake up all pending processes and let them fail with EIDRM. */
            for (q = sma->sem_pending; q; q = q->next) {
                    q->status = -EIDRM;
                    q->prev = NULL;
                    wake_up_process(q->sleeper); /* doesn't sleep */
            }
            sem_unlock(sma);
            ......
    }

So I propose moving sem_rmid() after the loop that wakes up every waiter. That guarantees that when the waiters are woken up, the updates to q->status and q->prev have already been made. A similar thing applies in the message queue case. The patch is attached below. Comments are very welcome. I have tested this patch on the 2.5.68 kernel with the LTP tests; it seems fine to me. Paul, could you test this with the DOTS test again? Thanks!
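A hedged sketch of the proposed ordering, derived from the snippet above rather than the literal patch (the lookup helper is hypothetical, since the set must now be found without removing it first):

    static void freeary(int id)
    {
            struct sem_array *sma = sem_get(id);    /* hypothetical: look up without removing;
                                                     * per-set lock already held */
            struct sem_queue *q;

            /* Wake up all pending processes and let them fail with EIDRM,
             * while the set is still locked and still in the ID array. */
            for (q = sma->sem_pending; q; q = q->next) {
                    q->status = -EIDRM;
                    q->prev = NULL;
                    wake_up_process(q->sleeper);    /* doesn't sleep */
            }

            sem_rmid(id);           /* only now drop the set from the ID array */
            sem_unlock(sma);
            /* ... freeing of the sem_array itself elided ... */
    }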
-
Andrew Morton authored
exit_mmap() currently assumes that the exiting task used the virtual address span TASK_SIZE. But on some platforms TASK_SIZE is variable, based on current->mm, and exit_mmap() can be called from (say) procfs's call to mmput(), in which case current->mm has nothing to do with the mm which is being put. So rather than assuming that the mm being put is current->mm, we need to calculate the virtual span of the mm. Add a new per-arch macro MM_VM_SIZE() for that. Some platforms can currently go BUG over this (where?). sparc64 is safe because our TASK_SIZE is constant. Platforms such as ia64 should stick the VM extent inside of mm_struct; I'd suggest adding it to mm_context_t.

1) TASK_SIZE means what is valid for mmap()s in the process's address space
2) MM_VM_SIZE means where things might be mapped for an MM, including private implementation-specific areas created by the kernel which the user cannot access
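A minimal sketch of the new hook under the default assumption (architectures with a variable task size would override it, e.g. from mm_context_t as suggested above):

    /* Assumed default definition: the mm spans at most TASK_SIZE.
     * exit_mmap() would tear down MM_VM_SIZE(mm) instead of using
     * TASK_SIZE, so it no longer depends on whoever is current. */
    #ifndef MM_VM_SIZE
    #define MM_VM_SIZE(mm)  TASK_SIZE
    #endif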
-
Andrew Morton authored
From: Rusty Russell <rusty@rustcorp.com.au> __module_get is theoretically allowed on a module inside its init routine, since we already hold an implicit reference. Currently this BUG()s: make the reference count explicit, which also simplifies the delete path. Also cleans up the unload path, so that it only drops the semaphore when it's actually sleeping for rmmod --wait.
-
Andrew Morton authored
From: Jan Kara <jack@suse.cz> I'm sending a fix for potential problems (dropping references which were not acquired) when dquot_transfer() fails.
-
Andrew Morton authored
From: Jan Kara <jack@suse.cz> I'm sending a patch which changes the numbers of blocks reserved for quota writes to more appropriate values (with the current values, ext3 assertions can be triggered).
-
Andrew Morton authored
From: Zwane Mwaikambo <zwane@linuxpower.ca> The proc interface has no way of telling whether there is an active cpufreq driver or not. This means that if you don't have a cpufreq-supported processor, this will oops in various possible places.
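A hedged sketch of the kind of guard this implies (the handler name and return value are illustrative, following the old read_proc convention; only cpufreq_driver itself is taken from the cpufreq core): bail out early when no driver has registered instead of dereferencing a NULL pointer.

    static int cpufreq_proc_read(char *page, char **start, off_t off,
                                 int count, int *eof, void *data)
    {
            /* No cpufreq driver registered: nothing to show. */
            if (!cpufreq_driver)
                    return -EINVAL;

            /* ... format the current policy into 'page' ... */
            return 0;
    }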
-