- 14 Aug, 2011 16 commits
-
-
Paul Turner authored
Account bandwidth usage on the cfs_rq level versus the task_groups to which they belong. Whether we are tracking bandwidth on a given cfs_rq is maintained under cfs_rq->runtime_enabled. cfs_rq's which belong to a bandwidth constrained task_group have their runtime accounted via the update_curr() path, which withdraws bandwidth from the global pool as desired. Updates involving the global pool are currently protected under cfs_bandwidth->lock, local runtime is protected by rq->lock. This patch only assigns and tracks quota, no action is taken in the case that cfs_rq->runtime_used exceeds cfs_rq->runtime_assigned. Signed-off-by: Paul Turner <pjt@google.com> Signed-off-by: Nikhil Rao <ncrao@google.com> Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com> Reviewed-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/20110721184757.179386821@google.com Signed-off-by: Ingo Molnar <mingo@elte.hu>
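As an illustration of the accounting scheme described above, here is a minimal userspace C sketch; the structure layout, the slice-refill helper and all names are assumptions made for the example, not the kernel code:

    /* Hedged sketch of the idea: per-cfs_rq runtime is consumed from
     * update_curr()-style accounting and topped up from a global
     * per-task_group pool under a lock. Names are illustrative. */
    #include <pthread.h>

    struct cfs_bandwidth_sketch {
        pthread_mutex_t lock;        /* stands in for cfs_bandwidth->lock */
        long long quota;             /* -1 means RUNTIME_INF (unconstrained) */
        long long runtime;           /* unclaimed runtime left in the global pool */
    };

    struct cfs_rq_sketch {
        int runtime_enabled;         /* is bandwidth tracked on this cfs_rq? */
        long long runtime_assigned;  /* locally claimed runtime */
        long long runtime_used;      /* runtime consumed so far */
    };

    /* Withdraw up to 'slice' from the global pool into the local cfs_rq. */
    static void assign_runtime(struct cfs_bandwidth_sketch *b,
                               struct cfs_rq_sketch *cfs_rq, long long slice)
    {
        pthread_mutex_lock(&b->lock);
        if (b->quota == -1) {
            cfs_rq->runtime_assigned += slice;   /* unconstrained group */
        } else if (b->runtime > 0) {
            long long amount = b->runtime < slice ? b->runtime : slice;
            b->runtime -= amount;
            cfs_rq->runtime_assigned += amount;
        }
        pthread_mutex_unlock(&b->lock);
    }

    /* Called from the update_curr() path, with the local rq->lock held. */
    static void account_runtime(struct cfs_bandwidth_sketch *b,
                                struct cfs_rq_sketch *cfs_rq,
                                long long delta_exec, long long slice)
    {
        if (!cfs_rq->runtime_enabled)
            return;
        cfs_rq->runtime_used += delta_exec;
        if (cfs_rq->runtime_used > cfs_rq->runtime_assigned)
            assign_runtime(b, cfs_rq, slice);
        /* As the changelog notes, no throttling happens yet if usage
         * still exceeds the assigned runtime. */
    }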
-
Paul Turner authored
Add constraints validation for CFS bandwidth hierarchies. Validate that:
    max(child bandwidth) <= parent_bandwidth
In a quota limited hierarchy, an unconstrained entity (e.g. bandwidth==RUNTIME_INF) inherits the bandwidth of its parent. This constraint is chosen over sum(child_bandwidth) as the notion of over-commit is valuable within SCHED_OTHER. Some basic code from the RT case is re-factored for reuse. Signed-off-by: Paul Turner <pjt@google.com> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/20110721184757.083774572@google.com Signed-off-by: Ingo Molnar <mingo@elte.hu>
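A compact sketch of the validation rule stated above; the tree layout and the function name are assumptions for illustration only:

    /* Hedged sketch: max(child bandwidth) <= parent bandwidth, where an
     * unconstrained child (RUNTIME_INF) inherits the parent's bandwidth. */
    #include <stdbool.h>

    #define RUNTIME_INF (-1LL)

    struct tg_sketch {
        long long quota;                 /* bandwidth per period, or RUNTIME_INF */
        struct tg_sketch *children;      /* first child */
        struct tg_sketch *sibling;       /* next sibling */
    };

    static bool validate_hierarchy(const struct tg_sketch *tg, long long parent_quota)
    {
        long long quota = tg->quota;

        if (quota == RUNTIME_INF)
            quota = parent_quota;            /* inherit the parent's limit */
        else if (parent_quota != RUNTIME_INF && quota > parent_quota)
            return false;                    /* child may not exceed its parent */

        for (const struct tg_sketch *c = tg->children; c; c = c->sibling)
            if (!validate_hierarchy(c, quota))
                return false;
        return true;
    }

Note that, unlike sum(child_bandwidth) <= parent_bandwidth, this rule deliberately allows siblings to over-commit the parent's quota.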
-
Paul Turner authored
In this patch we introduce the notion of CFS bandwidth, partitioned into globally unassigned bandwidth, and locally claimed bandwidth.
- The global bandwidth is per task_group, it represents a pool of unclaimed bandwidth that cfs_rqs can allocate from.
- The local bandwidth is tracked per-cfs_rq, this represents allotments from the global pool bandwidth assigned to a specific cpu.
Bandwidth is managed via cgroupfs, adding two new interfaces to the cpu subsystem:
- cpu.cfs_period_us : the bandwidth period in usecs
- cpu.cfs_quota_us : the cpu bandwidth (in usecs) that this tg will be allowed to consume over period above.
Signed-off-by: Paul Turner <pjt@google.com> Signed-off-by: Nikhil Rao <ncrao@google.com> Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com> Reviewed-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/20110721184756.972636699@google.com Signed-off-by: Ingo Molnar <mingo@elte.hu>
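A small usage sketch of the two new interfaces; the cgroup mount point and group name below are assumptions, only the file names come from the changelog:

    /* Hedged example: allow "mygroup" 50ms of CPU per 100ms period by writing
     * the two cgroupfs files named above. Paths are illustrative. */
    #include <stdio.h>

    static int write_value(const char *path, const char *val)
    {
        FILE *f = fopen(path, "w");
        if (!f)
            return -1;
        fprintf(f, "%s\n", val);
        return fclose(f);
    }

    int main(void)
    {
        /* cpu.cfs_period_us: the bandwidth period in usecs */
        write_value("/sys/fs/cgroup/cpu/mygroup/cpu.cfs_period_us", "100000");
        /* cpu.cfs_quota_us: CPU time (usecs) the group may consume per period */
        write_value("/sys/fs/cgroup/cpu/mygroup/cpu.cfs_quota_us", "50000");
        return 0;
    }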
-
Paul Turner authored
Introduce hierarchical task accounting for the group scheduling case in CFS, as well as promoting the responsibility for maintaining rq->nr_running to the scheduling classes. The primary motivation for this is that with scheduling classes supporting bandwidth throttling it is possible for entities participating in throttled sub-trees to not have root visible changes in rq->nr_running across activate and de-activate operations. This in turn leads to incorrect idle and weight-per-task load balance decisions. This also allows us to make a small fixlet to the fastpath in pick_next_task() under group scheduling. Note: this issue also exists with the existing sched_rt throttling mechanism. This patch does not address that. Signed-off-by: Paul Turner <pjt@google.com> Reviewed-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/20110721184756.878333391@google.com Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Yong Zhang authored
Since [sched/cpupri: Remove the vec->lock], member pri_active of struct cpupri is not needed any more, just remove it. Also clean stuff related to it. Signed-off-by: Yong Zhang <yong.zhang0@gmail.com> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/20110806001004.GA2207@zhy Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Steven Rostedt authored
[ This patch actually compiles. Thanks to Mike Galbraith for pointing that out. I compiled and booted this patch with no issues. ]

Re-examining the cpupri patch, I see there's a possible race because the updates of the two priorities' vec->counts are not protected by a memory barrier.

When an RT runqueue is overloaded and wants to push an RT task to another runqueue, it scans the RT priority vectors in a loop from lowest priority to highest. When we queue or dequeue an RT task that changes a runqueue's highest priority task, we update the vectors to show that a runqueue is rated at a different priority. To do this, we first set the new priority mask and increment the vec->count, and then clear the old priority mask by decrementing its vec->count. If we are lowering the runqueue's RT priority rating, it will trigger an RT pull, and we do not care if we miss pushing to this runqueue or not. But if we raise the priority, and the priority is still lower than an RT task that is looking to be pushed, we must make sure that this runqueue is still seen by the push algorithm (the loop).

Because the loop reads from lowest to highest, and the new priority is set before the old one is cleared, we will either see the new or old priority set and the vector will be checked. But! Since there's no memory barrier between the two updates, the old count may be decremented before the new count is incremented. This means the loop may see the old count of zero and skip it, and also the new count of zero before it was updated. A possible runqueue that the RT task could move to could be missed.

A conditional memory barrier is placed between the vec->count updates and is only called when both updates are done. The smp_wmb() has also been changed to smp_mb__before_atomic_inc/dec(), as they are not needed by archs that already synchronize atomic_inc/dec(). The smp_rmb() has been moved to be called at every iteration of the loop so that the race between seeing the two updates is visible by each iteration of the loop, as an arch is free to optimize the reading of memory of the counters in the loop.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Nick Piggin <npiggin@kernel.dk> Cc: Linus Torvalds <torvalds@linux-foundation.org> Link: http://lkml.kernel.org/r/1312547269.18583.194.camel@gandalf.stny.rr.com Signed-off-by: Ingo Molnar <mingo@elte.hu>
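A rough C11-atomics sketch of the ordering requirement described above; the data structures and function names are illustrative assumptions, not the kernel's cpupri code:

    /* Hedged sketch: publish the new priority's count before dropping the old
     * one, with a full barrier between the two updates, so a concurrent
     * low-to-high scan can never observe both counts as zero. */
    #include <stdatomic.h>

    #define NR_PRIO 102

    static atomic_int vec_count[NR_PRIO];

    static void move_rq_priority(int oldpri, int newpri)
    {
        atomic_fetch_add(&vec_count[newpri], 1);       /* new priority first */
        if (oldpri >= 0) {
            /* Barrier only when both updates are done (the "conditional"
             * memory barrier from the changelog). */
            atomic_thread_fence(memory_order_seq_cst);
            atomic_fetch_sub(&vec_count[oldpri], 1);
        }
    }

    static int find_candidate(int task_prio)
    {
        for (int pri = 0; pri < task_prio; pri++) {
            /* Re-establish ordering on every iteration (the smp_rmb() that
             * was moved into the loop). */
            atomic_thread_fence(memory_order_seq_cst);
            if (atomic_load(&vec_count[pri]) > 0)
                return pri;
        }
        return -1;
    }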
-
Steven Rostedt authored
sched/cpupri: Remove the vec->lock

The cpupri vec->lock has been showing up as a top contention lately. This is because the RT push/pull logic takes an aggressive approach for migrating RT tasks. The cpupri logic is in place to improve the performance of the push/pull when dealing with machines that have a large number of CPUs. The problem though is that a vec->lock is required, where a vec is a global per RT priority structure. That is, if there are lots of RT tasks at the same priority, every time they are added or removed from the RT queue, this global vec->lock is taken. Now that more kernel threads are becoming RT (RCU boost and threaded interrupts) this is becoming much more of an issue.

There are two variables that are being synced by the vec->lock: the cpupri bitmask, and the vec->counter. The cpupri bitmask is one bit per priority. If an RT priority vec has a process queued, then the vec->count is > 0 and the cpupri bitmask is set for that RT priority. If the cpupri bitmask gets out of sync with the vec->counter, we could end up pushing a low priority RT task to a high priority queue. That RT task that could have run immediately could be queued on a run queue with a higher priority task indefinitely.

The solution is not to use the cpupri bitmask and just look at the vec->count directly when doing a pull. The cpupri bitmask is just a fast way to scan the RT priorities when a pull is made. If, instead of using the bitmask, we just examine all RT priorities and look at the vec->counts, we can eliminate the vec->lock. The scan of RT tasks is to find a run queue that we can push an RT task to, and we do not push to a high priority queue, thus the scan only needs to go from 1 to RT task->prio, and not all 100 RT priorities. The push algorithm, which does the scan of RT priorities (and scan of the bitmask) only happens when we have an overloaded RT run queue (more than one RT task queued). The grabbing of the vec->lock happens every time any RT task is queued or dequeued on the run queue for that priority. The slowing down of the scan by not using a bitmask is negligible compared to the speed-up of removing the vec->lock contention and replacing it with an atomic counter and memory barrier.

To prove this, I wrote a patch that times both the loop and the code that grabs the vec->locks. I passed the patches to various people (and companies) to test and show the results. I let everyone choose their own load to test, giving different loads on the system, for various different setups. Here's some of the results (snipping to a few CPUs to not make this change log huge, but the results were consistent across the entire system).

System 1 (24 CPUs)

Before patch:
CPU:    Name    Count     Max     Min     Average   Total
----    ----    -----     ---     ---     -------   -----
[...]
cpu 20: loop    3057      1.766   0.061   0.642     1963.170
        vec     6782949   90.469  0.089   0.414     2811760.503
cpu 21: loop    2617      1.723   0.062   0.641     1679.074
        vec     6782810   90.499  0.089   0.291     1978499.900
cpu 22: loop    2212      1.863   0.063   0.699     1547.160
        vec     6767244   85.685  0.089   0.435     2949676.898
cpu 23: loop    2320      2.013   0.062   0.594     1380.265
        vec     6781694   87.923  0.088   0.431     2928538.224

After patch:
cpu 20: loop    2078      1.579   0.061   0.533     1108.006
        vec     6164555   5.704   0.060   0.143     885185.809
cpu 21: loop    2268      1.712   0.065   0.575     1305.248
        vec     6153376   5.558   0.060   0.187     1154960.469
cpu 22: loop    1542      1.639   0.095   0.533     823.249
        vec     6156510   5.720   0.060   0.190     1172727.232
cpu 23: loop    1650      1.733   0.068   0.545     900.781
        vec     6170784   5.533   0.060   0.167     1034287.953

All times are in microseconds. The 'loop' is the amount of time spent doing the loop across the priorities (before patch uses bitmask). The 'vec' is the amount of time in the code that requires grabbing the vec->lock. The second patch just does not have the vec lock, but encompasses the same code.

Amazingly the loop code even went down on average. The vec code went from .5 down to .18, that's more than half the time spent! Note, more than one test was run, but they all had the same results.

System 2 (64 CPUs)

Before patch:
CPU:    Name    Count     Max      Min     Average   Total
----    ----    -----     ---      ---     -------   -----
cpu 60: loop    0         0        0       0         0
        vec     5410840   277.954  0.084   0.782     4232895.727
cpu 61: loop    0         0        0       0         0
        vec     4915648   188.399  0.084   0.570     2803220.301
cpu 62: loop    0         0        0       0         0
        vec     5356076   276.417  0.085   0.786     4214544.548
cpu 63: loop    0         0        0       0         0
        vec     4891837   170.531  0.085   0.799     3910948.833

After patch:
cpu 60: loop    0         0        0       0         0
        vec     5365118   5.080    0.021   0.063     340490.267
cpu 61: loop    0         0        0       0         0
        vec     4898590   1.757    0.019   0.071     347903.615
cpu 62: loop    0         0        0       0         0
        vec     5737130   3.067    0.021   0.119     687108.734
cpu 63: loop    0         0        0       0         0
        vec     4903228   1.822    0.021   0.071     348506.477

The test run during the measurement did not have any (very few, from other CPUs) RT tasks pushing. But this shows that it helped out tremendously with the contention, as the contention happens because the vec->lock is taken only on queuing at an RT priority, and different CPUs that queue tasks at the same priority will have contention.

I tested on my own 4 CPU machine with the following results:

Before patch:
CPU:    Name    Count   Max      Min     Average   Total
----    ----    -----   ---      ---     -------   -----
cpu 0:  loop    2377    1.489    0.158   0.588     1398.395
        vec     4484    770.146  2.301   4.396     19711.755
cpu 1:  loop    2169    1.962    0.160   0.576     1250.110
        vec     4425    152.769  2.297   4.030     17834.228
cpu 2:  loop    2324    1.749    0.155   0.559     1299.799
        vec     4368    779.632  2.325   4.665     20379.268
cpu 3:  loop    2325    1.629    0.157   0.561     1306.113
        vec     4650    408.782  2.394   4.348     20222.577

After patch:
CPU:    Name    Count   Max      Min     Average   Total
----    ----    -----   ---      ---     -------   -----
cpu 0:  loop    2121    1.616    0.113   0.636     1349.189
        vec     4303    1.151    0.225   0.421     1811.966
cpu 1:  loop    2130    1.638    0.178   0.644     1372.927
        vec     4627    1.379    0.235   0.428     1983.648
cpu 2:  loop    2056    1.464    0.165   0.637     1310.141
        vec     4471    1.311    0.217   0.433     1937.927
cpu 3:  loop    2154    1.481    0.162   0.601     1295.083
        vec     4236    1.253    0.230   0.425     1803.008

This was running my migrate.c code that can be found at: http://lwn.net/Articles/425763/

The migrate code does stress the RT tasks a bit. This shows that the loop did increase a little after the patch, but not by much. The vec code dropped dramatically. From 4.3us down to .42us. That's a 10x improvement!

Tested-by: Mike Galbraith <mgalbraith@suse.de>
Tested-by: Luis Claudio R. Gonçalves <lgoncalv@redhat.com>
Tested-by: Matthew Hank Sabins <msabins@linux.vnet.ibm.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Reviewed-by: Gregory Haskins <gregory.haskins@gmail.com>
Acked-by: Hillf Danton <dhillf@gmail.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Chris Mason <chris.mason@oracle.com>
Link: http://lkml.kernel.org/r/1312317372.18583.101.camel@gandalf.stny.rr.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Steven Rostedt authored
Hillf Danton proposed a patch (see link) that cleaned up the sched_rt code that calculates the priority of the next highest priority task to be used in finding run queues to pull from. His patch removed the calculating of the next prio to just use the current prio when determining if we should examine a run queue to pull from. The problem with his patch was that it caused more false checks. Because we check a run queue for pushable tasks if the current priority of that run queue is higher in priority than the task about to run on our run queue. But after grabbing the locks and doing the real check, we find that there may not be a higher prio task to pull. Thus the locks were taken with nothing to do.

I added some trace_printks() to record when and how many times the run queue locks were taken to check for pullable tasks, compared to how many times we pulled a task.

With the current method, it was: 3806 locks taken vs 2812 pulled tasks
With Hillf's patch: 6728 locks taken vs 2804 pulled tasks

The number of times locks were taken to pull a task went up almost double with no better success rate.

But his patch did get me thinking. When we look at the priority of the highest task to consider taking the locks to do a pull, a failure to pull can be one of the following (in order of most likely):

 o RT task was pushed off already between the check and taking the lock
 o Waiting RT task can not be migrated
 o RT task's CPU affinity does not include the target run queue's CPU
 o RT task's priority changed between the check and taking the lock

And with Hillf's patch, the thing that caused most of the failures is that the RT task to pull was not at the right priority to pull (not greater than the current RT task priority on the target run queue).

Most of the above cases we can't help. But the current method does not check if the next highest prio RT task can be migrated or not, and if it can not, we still grab the locks to do the test (we don't find out about this fact until after we have the locks). I thought about this case, and realized that the pushable task plist that is maintained only holds RT tasks that can migrate. If we move the calculating of the next highest prio task from the inc/dec_rt_task() functions into the queuing of the pushable tasks, then we only measure the priorities of those tasks that we push, and we get this basically for free.

Not only does this patch make the code a little more efficient, it cleans it up and makes it a little simpler. Thanks to Hillf Danton for inspiring me on this patch.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Hillf Danton <dhillf@gmail.com> Cc: Gregory Haskins <ghaskins@novell.com> Link: http://lkml.kernel.org/r/BANLkTimQ67180HxCx5vgMqumqw1EkFh3qg@mail.gmail.com Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Steven Rostedt authored
When a new task is woken, the code to balance the RT task is currently skipped in the select_task_rq() call. But it will be pushed if the rq is currently overloaded with RT tasks anyway. The issue is that we already queued the task, and if it does get pushed, it will have to be dequeued and requeued on the new run queue. The advantage with pushing it first is that we avoid this requeuing as we are pushing it off before the task is ever queued. See commit 318e0893 ("sched: pre-route RT tasks on wakeup") for more details. The return of select_task_rq() when it is not a wake up has also been changed to return task_cpu() instead of smp_processor_id(). This is more of a sanity check because the only other current user of select_task_rq() besides wake ups is an exec, where task_cpu() should also be the same as smp_processor_id(). But if it is used for other purposes, let's keep the task on the same CPU. Why would we want to migrate it to the current CPU? Signed-off-by: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Hillf Danton <dhillf@gmail.com> Link: http://lkml.kernel.org/r/20110617015919.832743148@goodmis.org Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Hillf Danton authored
There's no reason to clean the exec_start in put_prev_task_rt() as it is reset when the task gets back to the run queue. This saves us doing a store() in the fast path. Signed-off-by: Hillf Danton <dhillf@gmail.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Mike Galbraith <efault@gmx.de> Cc: Yong Zhang <yong.zhang0@gmail.com> Link: http://lkml.kernel.org/r/BANLkTimqWD=q6YnSDi-v9y=LMWecgEzEWg@mail.gmail.com Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Hillf Danton authored
Do not call dequeue_pushable_task() when failing to push an eligible task, as it remains pushable, merely not at this particular moment. Signed-off-by: Hillf Danton <dhillf@gmail.com> Signed-off-by: Mike Galbraith <mgalbraith@gmx.de> Signed-off-by: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Yong Zhang <yong.zhang0@gmail.com> Link: http://lkml.kernel.org/r/1306895385.4791.26.camel@marge.simson.net Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Hillf Danton authored
Checking for the validity of sd is removed, since it is already checked by the for_each_domain macro. Signed-off-by: Hillf Danton <dhillf@gmail.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/BANLkTimT+Tut-3TshCDm-NiLLXrOznibNA@mail.gmail.com Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Hillf Danton authored
When computing the next priority for a given run-queue, the check for RT priority of the task determined by the pick_next_highest_task_rt() function could be removed, since only RT tasks are returned by the function. Reviewed-by: Yong Zhang <yong.zhang0@gmail.com> Signed-off-by: Hillf Danton <dhillf@gmail.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/BANLkTimxmWiof9s5AvS3v_0X+sMiE=0x5g@mail.gmail.com Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Mike Galbraith authored
Setting child->prio = current->normal_prio _after_ SCHED_RESET_ON_FORK has been handled for an RT parent gives birth to a deranged mutant child with non-RT policy, but RT prio and sched_class. Move PI leakage protection up, always set priorities and weight, and if the child is leaving RT class, reset rt_priority to the proper value. Signed-off-by: Mike Galbraith <efault@gmx.de> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/1311779695.8691.2.camel@marge.simson.net Signed-off-by: Ingo Molnar <mingo@elte.hu>
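A condensed sketch of the fork-time ordering issue and fix described above; the field names, the nice-to-prio mapping and the helper are assumptions, not the actual sched_fork() code:

    /* Hedged sketch: inherit the parent's normal_prio first (PI leakage
     * protection moved up), then let the SCHED_RESET_ON_FORK path reset
     * policy, rt_priority and weight for a child leaving the RT class. */
    #define SCHED_NORMAL 0
    #define NICE_TO_PRIO(nice) (120 + (nice))   /* illustrative mapping */

    struct task_sketch {
        int policy;
        int prio, normal_prio, static_prio;
        unsigned int rt_priority;
        int reset_on_fork;                      /* SCHED_RESET_ON_FORK set? */
    };

    static void fork_prio_sketch(struct task_sketch *child,
                                 const struct task_sketch *parent)
    {
        /* Done before the reset handling, so a PI-boosted parent cannot
         * leak its boosted prio into the child. */
        child->prio = parent->normal_prio;

        if (child->reset_on_fork) {
            /* Child leaves the RT class: reset everything consistently,
             * otherwise we get the "non-RT policy but RT prio/sched_class"
             * mutant the changelog describes. */
            child->policy = SCHED_NORMAL;
            child->rt_priority = 0;
            child->static_prio = NICE_TO_PRIO(0);
            child->prio = child->normal_prio = child->static_prio;
        }
    }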
-
Yong Zhang authored
Remove the WAKEUP_PREEMPT feature, disabling it doesn't make any sense and it's outlived its use by a long, long while. Signed-off-by: Yong Zhang <yong.zhang0@gmail.com> Acked-by: Mike Galbraith <efault@gmx.de> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/20110729082033.GB12106@zhy Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Jan H. Schönherr authored
Since commit a2d47777 ("sched: fix stale value in average load per task") the variable rq->avg_load_per_task is no longer required. Remove it. Signed-off-by: Jan H. Schönherr <schnhrr@cs.tu-berlin.de> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/1312189408-17172-1-git-send-email-schnhrr@cs.tu-berlin.de Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 12 Aug, 2011 5 commits
-
-
git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc Linus Torvalds authored
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc: sparc: Don't do hypervisor calls on non-sun4v in DS driver.
-
David S. Miller authored
Reported-by: Pieter-Paul Giesberts <pieterpg@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Boaz Harrosh authored
Just like files-layout, blocks & objects layouts are part of the NFS 4.1 protocol and should be automatically selected if NFS_4_1 is selected. The small problem is that these depend on other kernel support being present, while the files layout depends only on NFS itself. This patch removes the objects and blocks layouts from the user's choice, but makes sure they are selected only if the subsystems they depend on are present in the kernel. Signed-off-by: Boaz Harrosh <bharrosh@panasas.com> Acked-by: Peng Tao <peng_tao@emc.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Eric Sandeen authored
Commit df5e6223 ("ext4: fix deadlock in ext4_symlink() in ENOSPC conditions") recalculated the number of credits needed for a long symlink, in the process of splitting it into two transactions. However, the first credit calculation under-counted because if selinux is enabled, credits are needed to create the selinux xattr as well. Overrunning the reservation will result in an OOPS in jbd2_journal_dirty_metadata() due to this assert: J_ASSERT_JH(jh, handle->h_buffer_credits > 0); Fix this by increasing the reservation size. Signed-off-by: Eric Sandeen <sandeen@redhat.com> Reviewed-by: Jan Kara <jack@suse.cz> Acked-by: "Theodore Ts'o" <tytso@mit.edu> Cc: stable@kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Eric Sandeen authored
Commit ae54870a ("ext3: Fix lock inversion in ext3_symlink()") recalculated the number of credits needed for a long symlink, in the process of splitting it into two transactions. However, the first credit calculation under-counted because if selinux is enabled, credits are needed to create the selinux xattr as well. Overrunning the reservation will result in an OOPS in journal_dirty_metadata() due to this assert: J_ASSERT_JH(jh, handle->h_buffer_credits > 0); Fix this by increasing the reservation size. Signed-off-by: Eric Sandeen <sandeen@redhat.com> Reviewed-by: Jan Kara <jack@suse.cz> Acked-by: "Theodore Ts'o" <tytso@mit.edu> Cc: stable@kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 11 Aug, 2011 6 commits
-
-
Vasiliy Kulikov authored
The patch http://lkml.org/lkml/2003/7/13/226 introduced an RLIMIT_NPROC check in set_user() to check for NPROC exceeding via setuid() and similar functions. Before the check there was a possibility to greatly exceed the allowed number of processes by an unprivileged user if the program relied on rlimit only. But the check created a new security threat: many poorly written programs simply don't check the setuid() return code and believe it cannot fail if executed with root privileges. So, the check is removed in this patch because of the too frequent privilege escalations related to buggy programs. The NPROC limit can still be enforced in the common code flow of daemons spawning user processes. Most daemons do fork()+setuid()+execve(). The check introduced in execve() (1) enforces the same limit as in setuid() and (2) doesn't create similar security issues.

Neil Brown suggested tracking which specific process has exceeded the limit by setting the PF_NPROC_EXCEEDED process flag. With the change only this process will fail on execve(), and other processes' execve() behaviour is not changed.

Solar Designer suggested re-checking whether the NPROC limit is still exceeded at the moment of execve(). If the process was sleeping for days between set*uid() and execve(), and the NPROC counter stepped down under the limit, a deferred execve() failure because the NPROC limit was exceeded days ago would be unexpected. If the limit is not exceeded anymore, we clear the flag on successful calls to execve() and fork(). The flag is also cleared on successful calls to set_user() as the limit was exceeded for the previous user, not the current one. A similar check was introduced in the -ow patches (without the process flag).

v3 - clear PF_NPROC_EXCEEDED on successful calls to set_user().

Reviewed-by: James Morris <jmorris@namei.org> Signed-off-by: Vasiliy Kulikov <segoon@openwall.com> Acked-by: NeilBrown <neilb@suse.de> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
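A simplified sketch of the control flow described above; the helper names and the stubbed limit check are assumptions, not the actual kernel code:

    /* Hedged sketch: set_user() only flags the offending process, execve()
     * enforces the limit and clears PF_NPROC_EXCEEDED once it no longer
     * applies. */
    #include <errno.h>
    #include <stdbool.h>

    #define PF_NPROC_EXCEEDED 0x1

    struct task_sketch {
        unsigned int flags;
    };

    /* Placeholder: would compare the user's process count to RLIMIT_NPROC. */
    static bool nproc_limit_exceeded(void)
    {
        return false;
    }

    static int set_user_sketch(struct task_sketch *p)
    {
        if (nproc_limit_exceeded())
            p->flags |= PF_NPROC_EXCEEDED;   /* remember, but do not fail setuid() */
        else
            p->flags &= ~PF_NPROC_EXCEEDED;  /* limit belonged to the previous user */
        return 0;
    }

    static int execve_nproc_check_sketch(struct task_sketch *p)
    {
        /* Only the flagged process fails, and only if the limit is still
         * exceeded at execve() time. */
        if ((p->flags & PF_NPROC_EXCEEDED) && nproc_limit_exceeded())
            return -EAGAIN;
        p->flags &= ~PF_NPROC_EXCEEDED;      /* also cleared on success */
        return 0;
    }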
-
Linus Torvalds authored
Merge branch 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip

* 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  perf symbols: Check '/tmp/perf-' symbol file ownership
  perf sched: Usage leftover from trace -> script rename
  perf sched: Do not delete session object prematurely
  perf tools: Check $HOME/.perfconfig ownership
  perf, x86: Add model 45 SandyBridge support
  perf tools: Add support to install perf python extension
  perf tools: do not look at ./config for configuration
  perf tools: Make clean leaves some files
  perf lock: Dropping unsupported ':r' modifier
  perf probe: Fix coredump introduced by probe module option
  jump label: Reduce the cycle count by changing the link order
  perf report: Use ui__warning in some more places
  perf python: Add PERF_RECORD_{LOST,READ,SAMPLE} routine tables
  perf evlist: Introduce 'disable' method
  trace events: Update version number reference to new 3.x scheme for EVENT_POWER_TRACING_DEPRECATED
  perf buildid-cache: Zero out buffer of filenames when adding/removing buildid
-
Tracey Dent authored
Change to new git tree - (git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git). Signed-off-by: Tracey Dent <tdent48227@gmail.com> Acked-by: WANG Cong <xiyou.wangcong@gmail.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Linus Torvalds authored
This reverts commit af9d220b. It turns out that one was meant to be applied on top of the edac.git tree in -next that has more i7core_edac changes, but that wasn't clear in the original email. Reported-by: Stephen Rothwell <sfr@canb.auug.org.au> Acked-by: Borislav Petkov <borislav.petkov@amd.com> Cc: Randy Dunlap <rdunlap@xenotime.net> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Peng Tao authored
PNFS_BLOCK needs BLK_DEV_DM/MD, which is not a dependency for other pnfs layout drivers. Separate it out so others can still build when BLK_DEV_DM/MD is not enabled. Also change 'select' to 'depends on' to avoid build failures. Reported-and-tested-by: Randy Dunlap <rdunlap@xenotime.net> Signed-off-by: Peng Tao <peng_tao@emc.com> Acked-by: Benny Halevy <bhalevy@tonian.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Linus Torvalds authored
* 'fixes' of master.kernel.org:/home/rmk/linux-2.6-arm:
  ARM: drop experimental status for ARM_PATCH_PHYS_VIRT
  ARM: 7008/1: alignment: Make SIGBUS sent to userspace POSIXly correct
  ARM: 7007/1: alignment: Prevent ignoring of faults with ARMv6 unaligned access model
  ARM: 7010/1: mm: fix invalid loop for poison_init_mem
  ARM: 7005/1: freshen up mm/proc-arm946.S
  dmaengine: PL08x: Fix trivial build error
  ARM: Fix build error for SMP=n builds
-
- 10 Aug, 2011 9 commits
-
-
git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc Linus Torvalds authored
* 'merge' of git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc:
  powerpc: Really fix build without CONFIG_PCI
  powerpc: Fix build without CONFIG_PCI
  powerpc/4xx: Fix build of PCI code on 405
  powerpc/pseries: Simplify vpa deregistration functions
  powerpc/pseries: Cleanup VPA registration and deregistration errors
  powerpc/pseries: Fix kexec on recent firmware versions
  MAINTAINERS: change maintainership of mpc5xxx
  powerpc: Make KVM_GUEST default to n
  powerpc/kvm: Fix build errors with older toolchains
  powerpc: Lack of ibm,io-events not that important!
  powerpc: Move kdump default base address to half RMO size on 64bit
  powerpc/perf: Disable pagefaults during callchain stack read
  ppc: Remove duplicate definition of PV_POWER7
  powerpc: pseries: Fix kexec on machines with more than 4TB of RAM
  powerpc: Jump label misalignment causes oops at boot
  powerpc: Clean up some panic messages in prom_init
  powerpc: Fix device tree claim code
  powerpc: Return the_cpu_ spec from identify_cpu
  powerpc: mtspr/mtmsr should take an unsigned long
-
git://git.kernel.org/pub/scm/linux/kernel/git/ecryptfs/ecryptfs-2.6 Linus Torvalds authored
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ecryptfs/ecryptfs-2.6:
  Ecryptfs: Add mount option to check uid of device being mounted = expect uid
  eCryptfs: Fix payload_len unitialized variable warning
  eCryptfs: fix compile error
  eCryptfs: Return error when lower file pointer is NULL
-
Borislav Petkov authored
Both AMD and Intel i7 EDAC drivers use MCE features and are thus dependent on this functionality being present in the kernel. Express this in Kconfig so that randconfig builds don't break. Reported-by: Randy Dunlap <rdunlap@xenotime.net> Signed-off-by: Borislav Petkov <borislav.petkov@amd.com> Acked-by: Randy Dunlap <rdunlap@xenotime.net> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Benjamin Herrenschmidt authored
Brown paper bag day, previous commit wouldn't work very well with modules enabled. Move the exports into the ifdef. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Russell King authored
This has now been well tested, and several platforms are now selecting this directly. It's time to drop its experimental status. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Ingo Molnar authored
Merge branch 'perf/core' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/urgent
-
John Johansen authored
Close a TOCTOU race for mounts done via ecryptfs-mount-private. The mount source (device) can be raced when the ownership test is done in userspace. Provide Ecryptfs a means to force the uid check at mount time. Signed-off-by: John Johansen <john.johansen@canonical.com> Cc: <stable@kernel.org> Signed-off-by: Tyler Hicks <tyhicks@linux.vnet.ibm.com>
-
Jonathan Nieder authored
syslog-ng versions before 3.3.0beta1 (2011-05-12) assume that CAP_SYS_ADMIN is sufficient to access syslog, so ever since CAP_SYSLOG was introduced (2010-11-25) they have triggered a warning. Commit ee24aebf ("cap_syslog: accept CAP_SYS_ADMIN for now") improved matters a little by making syslog-ng work again, just keeping the WARN_ONCE(). But still, this is a warning that writes a stack trace we don't care about to syslog, sets a taint flag, and alarms sysadmins when nothing worse has happened than use of an old userspace with a recent kernel. Convert the WARN_ONCE to a printk_once to avoid that while continuing to give userspace developers a hint that this is an unwanted backward-compatibility feature and won't be around forever. Reported-by: Ralf Hildebrandt <ralf.hildebrandt@charite.de> Reported-by: Niels <zorglub_olsen@hotmail.com> Reported-by: Paweł Sikora <pluto@agmk.net> Signed-off-by: Jonathan Nieder <jrnieder@gmail.com> Liked-by: Gergely Nagy <algernon@madhouse-project.org> Acked-by: Serge Hallyn <serge@hallyn.com> Acked-by: James Morris <jmorris@namei.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
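For illustration, a userspace imitation of the one-shot message pattern the patch switches to; the message text and function name are assumptions, and the real change simply uses printk_once() instead of WARN_ONCE():

    /* Hedged sketch: print the compatibility warning only once, without the
     * stack trace and taint flag that WARN_ONCE() would produce. */
    #include <stdio.h>
    #include <stdbool.h>

    static void warn_legacy_syslog_caps(void)
    {
        static bool printed;    /* printk_once() keeps similar one-shot state */

        if (!printed) {
            printed = true;
            fprintf(stderr,
                "warning: syslog access with CAP_SYS_ADMIN; please use CAP_SYSLOG\n");
        }
    }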
-
Michal Hocko authored
This reverts commit 8521fc50. The patch incorrectly assumes that using atomic FLUSHING_CACHED_CHARGE bit operations is sufficient but that is not true. Johannes Weiner has reported a crash during parallel memory cgroup removal:

  BUG: unable to handle kernel NULL pointer dereference at 0000000000000018
  IP: [<ffffffff81083b70>] css_is_ancestor+0x20/0x70
  Oops: 0000 [#1] PREEMPT SMP
  Pid: 19677, comm: rmdir Tainted: G W 3.0.0-mm1-00188-gf38d32b #35 ECS MCP61M-M3/MCP61M-M3
  RIP: 0010:[<ffffffff81083b70>] css_is_ancestor+0x20/0x70
  RSP: 0018:ffff880077b09c88 EFLAGS: 00010202
  Process rmdir (pid: 19677, threadinfo ffff880077b08000, task ffff8800781bb310)
  Call Trace:
   [<ffffffff810feba3>] mem_cgroup_same_or_subtree+0x33/0x40
   [<ffffffff810feccf>] drain_all_stock+0x11f/0x170
   [<ffffffff81103211>] mem_cgroup_force_empty+0x231/0x6d0
   [<ffffffff811036c4>] mem_cgroup_pre_destroy+0x14/0x20
   [<ffffffff81080559>] cgroup_rmdir+0xb9/0x500
   [<ffffffff81114d26>] vfs_rmdir+0x86/0xe0
   [<ffffffff81114e7b>] do_rmdir+0xfb/0x110
   [<ffffffff81114ea6>] sys_rmdir+0x16/0x20
   [<ffffffff8154d76b>] system_call_fastpath+0x16/0x1b

We are crashing because we try to dereference the cached memcg when we are checking whether we should wait for draining on the cache. The cache is already cleaned up, though.

There is also a theoretical chance that the cached memcg gets freed between the test for FLUSHING_CACHED_CHARGE and the dereference in mem_cgroup_same_or_subtree:

  CPU0             CPU1                  CPU2
                   mem=stock->cached
                   stock->cached=NULL
                   clear_bit
                                         test_and_set_bit
  test_bit()       ...
  <preempted>      mem_cgroup_destroy
  use after free

The percpu_charge_mutex protected from this race because sync draining is exclusive. It is safer to revert now and come up with a more parallel implementation later.

Signed-off-by: Michal Hocko <mhocko@suse.cz> Reported-by: Johannes Weiner <jweiner@redhat.com> Acked-by: Johannes Weiner <jweiner@redhat.com> Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Cc: stable@kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 09 Aug, 2011 4 commits
-
-
git://git.kernel.org/pub/scm/linux/kernel/git/penberg/slab-2.6 Linus Torvalds authored
* 'slab/urgent' of git://git.kernel.org/pub/scm/linux/kernel/git/penberg/slab-2.6: slub: Fix partial count comparison confusion
-
Tyler Hicks authored
fs/ecryptfs/keystore.c: In function ‘ecryptfs_generate_key_packet_set’:
fs/ecryptfs/keystore.c:1991:28: warning: ‘payload_len’ may be used uninitialized in this function [-Wuninitialized]
fs/ecryptfs/keystore.c:1976:9: note: ‘payload_len’ was declared here
Signed-off-by: Tyler Hicks <tyhicks@linux.vnet.ibm.com>
-
Roberto Sassu authored
This patch fixes the compile error reported at the address: https://bugzilla.kernel.org/show_bug.cgi?id=40292 The problem arises when compiling eCryptfs as built-in and the 'encrypted' key type as a module. The patch prevents this combination from being set in the kernel configuration, by fixing the eCryptfs dependencies. Signed-off-by: Roberto Sassu <roberto.sassu@polito.it> Reported-by: David Hill <hilld@binarystorm.net> Signed-off-by: Tyler Hicks <tyhicks@linux.vnet.ibm.com>
-
Tyler Hicks authored
When an eCryptfs inode's lower file has been closed, and the pointer has been set to NULL, return an error when trying to do a lower read or write rather than calling BUG(). https://bugzilla.kernel.org/show_bug.cgi?id=37292 Signed-off-by: Tyler Hicks <tyhicks@linux.vnet.ibm.com> Cc: <stable@kernel.org>
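A tiny sketch of the pattern described above; the structure and function are illustrative stand-ins, not the eCryptfs code:

    /* Hedged sketch: fail the individual lower I/O with an error instead of
     * bringing the whole kernel down with BUG() when the lower file is gone. */
    #include <errno.h>
    #include <stddef.h>

    struct ecryptfs_inode_sketch {
        void *lower_file;                    /* stands in for the lower struct file */
    };

    static int lower_write_sketch(struct ecryptfs_inode_sketch *inode,
                                  const char *data, size_t size, long long offset)
    {
        if (!inode->lower_file)
            return -EIO;                     /* previously: BUG() */

        /* ... perform the write on the lower file ... */
        (void)data; (void)size; (void)offset;
        return 0;
    }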
-