Commit ab706391 authored by Andrew Morton, committed by Richard Henderson

[PATCH] simplify and generalise cond_resched_lock

cond_resched_lock() _used_ to be "if this is the only lock which I am holding
then drop it and schedule if needed".

However, with the i_shared_lock->i_shared_sem change, neither of its two
callsites needs those semantics any more.  So this patch changes it to mean
just "if needed, drop this lock and reschedule".

This allows us to also schedule if CONFIG_PREEMPT=n, which is useful -
zap_page_range() can run for an awfully long time.

The preempt and non-preempt versions of cond_resched_lock() have been
unified.
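To illustrate the new semantics, here is a minimal usage sketch; the function, lock, and item-count names are hypothetical illustrations, not code from this patch.  A long-running loop holds a spinlock and periodically offers to reschedule, which now works whether or not CONFIG_PREEMPT is set:

#include <linux/spinlock.h>
#include <linux/sched.h>

static spinlock_t example_lock = SPIN_LOCK_UNLOCKED;	/* hypothetical lock */

/* Hypothetical long walk over many items while holding example_lock. */
static void example_long_walk(unsigned long nr_items)
{
	unsigned long i;

	spin_lock(&example_lock);
	for (i = 0; i < nr_items; i++) {
		/* ... process one item under the lock ... */

		/*
		 * New meaning: if a reschedule is pending, drop
		 * example_lock, call schedule(), then retake the lock.
		 * This can now happen even with CONFIG_PREEMPT=n, so the
		 * caller must not hold any other spinlock here.
		 */
		cond_resched_lock(&example_lock);
	}
	spin_unlock(&example_lock);
}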
parent d9be9136
include/linux/sched.h

@@ -724,19 +724,17 @@ static inline void cond_resched(void)
 	__cond_resched();
 }
 
-#ifdef CONFIG_PREEMPT
-
 /*
  * cond_resched_lock() - if a reschedule is pending, drop the given lock,
  * call schedule, and on return reacquire the lock.
  *
- * Note: this does not assume the given lock is the _only_ lock held.
- * The kernel preemption counter gives us "free" checking that we are
- * atomic -- let's use it.
+ * This works OK both with and without CONFIG_PREEMPT. We do strange low-level
+ * operations here to prevent schedule() from being called twice (once via
+ * spin_unlock(), once by hand).
  */
 static inline void cond_resched_lock(spinlock_t * lock)
 {
-	if (need_resched() && preempt_count() == 1) {
+	if (need_resched()) {
 		_raw_spin_unlock(lock);
 		preempt_enable_no_resched();
 		__cond_resched();
@@ -744,14 +742,6 @@ static inline void cond_resched_lock(spinlock_t * lock)
 	}
 }
 
-#else
-
-static inline void cond_resched_lock(spinlock_t * lock)
-{
-}
-
-#endif
-
 /* Reevaluate whether the task has signals pending delivery.
    This is required every time the blocked sigset_t changes.
    callers must hold sig->siglock. */
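Reassembled from the two hunks above, the unified helper reads roughly as follows after the patch; the re-acquire of the lock falls in context the diff does not show, so its exact placement here is inferred rather than quoted:

static inline void cond_resched_lock(spinlock_t * lock)
{
	if (need_resched()) {
		_raw_spin_unlock(lock);
		preempt_enable_no_resched();
		__cond_resched();
		spin_lock(lock);	/* re-acquire; line inferred from collapsed context */
	}
}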
mm/memory.c

@@ -489,6 +489,8 @@ void zap_page_range(struct vm_area_struct *vma, unsigned long address, unsigned
 	mmu_gather_t *tlb;
 	unsigned long end, block;
 
+	might_sleep();
+
 	if (is_vm_hugetlb_page(vma)) {
 		zap_hugepage_range(vma, address, size);
 		return;
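The only change to zap_page_range() itself is the might_sleep() annotation, which documents that the function may now sleep even on non-preemptible kernels.  For reference, a sketch of how the annotation was typically defined elsewhere in kernels of this era (not part of this patch, and quoted from memory rather than from the tree):

#ifdef CONFIG_DEBUG_SPINLOCK_SLEEP
#define might_sleep()	__might_sleep(__FILE__, __LINE__)	/* warn if called from atomic context */
#else
#define might_sleep()	do { } while (0)			/* otherwise a no-op */
#endif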