Commit ab706391 authored by Andrew Morton, committed by Richard Henderson

[PATCH] simplify and generalise cond_resched_lock

cond_resched_lock() _used_ to be "if this is the only lock which I am holding
then drop it and schedule if needed".

However, with the i_shared_lock->i_shared_sem change, neither of its two
callsites now needs those semantics.  So this patch changes it to mean just
"if needed, drop this lock and reschedule".

This allows us to reschedule even when CONFIG_PREEMPT=n, which is useful:
zap_page_range() can run for an awfully long time.

The preempt and non-preempt versions of cond_resched_lock() have been
unified.
parent d9be9136
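
To illustrate the new semantics, here is a minimal caller sketch (editorial, not part of this patch; struct item_list, handle_one_item() and process_all_items() are made-up names): a long loop that runs under a spinlock can now yield the CPU periodically even on CONFIG_PREEMPT=n kernels.

#include <linux/list.h>
#include <linux/sched.h>
#include <linux/spinlock.h>

/* Hypothetical data structure, for illustration only. */
struct item_list {
	spinlock_t lock;
	struct list_head head;
};

/* Hypothetical per-item work; the details don't matter for the example. */
static void handle_one_item(struct item_list *list)
{
}

static void process_all_items(struct item_list *list)
{
	spin_lock(&list->lock);
	while (!list_empty(&list->head)) {
		handle_one_item(list);
		/*
		 * If a reschedule is pending, drop list->lock, call
		 * schedule(), and reacquire the lock before continuing.
		 * With this patch that now happens on CONFIG_PREEMPT=n
		 * kernels as well.
		 */
		cond_resched_lock(&list->lock);
	}
	spin_unlock(&list->lock);
}

Under the old definition the cond_resched_lock() call above would have been a no-op on CONFIG_PREEMPT=n, and on CONFIG_PREEMPT=y it would only have rescheduled if list->lock were the sole lock held.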
@@ -724,19 +724,17 @@ static inline void cond_resched(void)
 	__cond_resched();
 }
 
-#ifdef CONFIG_PREEMPT
 /*
  * cond_resched_lock() - if a reschedule is pending, drop the given lock,
  * call schedule, and on return reacquire the lock.
  *
- * Note: this does not assume the given lock is the _only_ lock held.
- * The kernel preemption counter gives us "free" checking that we are
- * atomic -- let's use it.
+ * This works OK both with and without CONFIG_PREEMPT.  We do strange low-level
+ * operations here to prevent schedule() from being called twice (once via
+ * spin_unlock(), once by hand).
  */
 static inline void cond_resched_lock(spinlock_t * lock)
 {
-	if (need_resched() && preempt_count() == 1) {
+	if (need_resched()) {
 		_raw_spin_unlock(lock);
 		preempt_enable_no_resched();
 		__cond_resched();
@@ -744,14 +742,6 @@ static inline void cond_resched_lock(spinlock_t * lock)
 	}
 }
-#else
-static inline void cond_resched_lock(spinlock_t * lock)
-{
-}
-#endif
 
 /* Reevaluate whether the task has signals pending delivery.
    This is required every time the blocked sigset_t changes.
    callers must hold sig->siglock. */
...
@@ -489,6 +489,8 @@ void zap_page_range(struct vm_area_struct *vma, unsigned long address, unsigned
 	mmu_gather_t *tlb;
 	unsigned long end, block;
 
+	might_sleep();
+
 	if (is_vm_hugetlb_page(vma)) {
 		zap_hugepage_range(vma, address, size);
 		return;
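
As a reading aid, the unified helper is shown again below with the reasoning behind the low-level calls spelled out; the comments are editorial, and the trailing spin_lock(lock) comes from context that the hunks above elide.

static inline void cond_resched_lock(spinlock_t * lock)
{
	if (need_resched()) {
		/*
		 * A plain spin_unlock() would run preempt_enable(), which on
		 * CONFIG_PREEMPT=y can itself end up in schedule().  Dropping
		 * the lock and the preempt count "by hand" avoids that...
		 */
		_raw_spin_unlock(lock);
		preempt_enable_no_resched();
		/* ...so that schedule() runs exactly once, in here. */
		__cond_resched();
		spin_lock(lock);
	}
}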