1. 19 Dec, 2012 24 commits
  2. 18 Dec, 2012 16 commits
    • Merge branch 'akpm' (more patches from Andrew) · 673ab878
      Linus Torvalds authored
      Merge patches from Andrew Morton:
       "Most of the rest of MM, plus a few dribs and drabs.
      
        I still have quite a few irritating patches left around: ones with
        dubious testing results, lack of review, ones which should have gone
        via maintainer trees but the maintainers are slack, etc.
      
        I need to be more activist in getting these things wrapped up outside
        the merge window, but they're such a PITA."
      
      * emailed patches from Andrew Morton <akpm@linux-foundation.org>: (48 commits)
        mm/vmscan.c: avoid possible deadlock caused by too_many_isolated()
        vmscan: comment too_many_isolated()
        mm/kmemleak.c: remove obsolete simple_strtoul
        mm/memory_hotplug.c: improve comments
        mm/hugetlb: create hugetlb cgroup file in hugetlb_init
        mm/mprotect.c: coding-style cleanups
        Documentation: ABI: /sys/devices/system/node/
        slub: drop mutex before deleting sysfs entry
        memcg: add comments clarifying aspects of cache attribute propagation
        kmem: add slab-specific documentation about the kmem controller
        slub: slub-specific propagation changes
        slab: propagate tunable values
        memcg: aggregate memcg cache values in slabinfo
        memcg/sl[au]b: shrink dead caches
        memcg/sl[au]b: track all the memcg children of a kmem_cache
        memcg: destroy memcg caches
        sl[au]b: allocate objects from memcg cache
        sl[au]b: always get the cache from its page in kmem_cache_free()
        memcg: skip memcg kmem allocations in specified code regions
        memcg: infrastructure to match an allocation to the right cache
        ...
    • Merge tag 'hwmon-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/groeck/linux-staging · d7b96ca5
      Linus Torvalds authored
      Pull hwmon fixlet from Guenter Roeck:
       "Fix fallout from __devexit removal"
      
      * tag 'hwmon-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/groeck/linux-staging:
        hwmon: (twl4030-madc-hwmon) Fix warning message caused by removal of __devexit
    • Merge git://git.kernel.org/pub/scm/linux/kernel/git/cmetcalf/linux-tile · 5b3040a4
      Linus Torvalds authored
      Pull tile updates from Chris Metcalf:
       "These are a smattering of minor changes from Tilera and other folks,
        mostly in the ptrace area."
      
      * git://git.kernel.org/pub/scm/linux/kernel/git/cmetcalf/linux-tile:
        arch/tile: set CORE_DUMP_USE_REGSET on tile
        arch/tile: implement arch_ptrace using user_regset on tile
        arch/tile: implement user_regset interface on tile
        arch/tile: clean up tile-specific PTRACE_SETOPTIONS
        arch/tile: provide PT_FLAGS_COMPAT value in pt_regs
        tile/PCI: use for_each_pci_dev to simplify the code
        tilegx: remove __init from pci fixup hook
    • mm/vmscan.c: avoid possible deadlock caused by too_many_isolated() · 3cf23841
      Fengguang Wu authored
      Neil found that if too_many_isolated() returns true while performing
      direct reclaim we can end up waiting for other threads to complete their
      direct reclaim.  If those threads are allowed to enter the FS or IO to
      free memory, but this thread is not, then it is possible that those
      threads will be waiting on this thread and so we get a circular deadlock.
      
      some task enters direct reclaim with GFP_KERNEL
        => too_many_isolated() false
          => vmscan and run into dirty pages
            => pageout()
              => take some FS lock
                => fs/block code does GFP_NOIO allocation
                  => enter direct reclaim again
                    => too_many_isolated() true
                      => waiting for others to progress, however the other
                         tasks may be circular waiting for the FS lock..
      
      The fix is to let !__GFP_IO and !__GFP_FS direct reclaims enjoy higher
      priority than normal ones, by lowering the throttle threshold for the
      latter.
      
      Allowing ~1/8 of the LRU to be isolated by normal reclaims is large
      enough.  For example, for a 1GB LRU list, that's ~128MB of isolated
      pages, or 1k blocked tasks (each isolating 32 4KB pages), or 64 blocked
      tasks per logical CPU (assuming 16 logical CPUs per NUMA node).  So it's
      unlikely that a CPU goes idle waiting (when it could make progress)
      because of this limit: there are far more sleeping reclaim tasks than
      CPUs, so any given task may well be blocked by some low-level queue/lock
      anyway.
      
      Now !GFP_IOFS reclaims won't be waiting for GFP_IOFS reclaims to progress.
      They will be blocked only when there are too many concurrent !GFP_IOFS
      reclaims, but that's very unlikely because IO-less direct reclaims are
      able to progress much faster and won't deadlock with each other.  The
      threshold is raised high enough for them, so that there can be sufficient
      parallel progress of !GFP_IOFS reclaims.
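
      As a rough illustration of the rule above, here is a minimal userspace
      model of the throttling decision; the constant names, the helper name
      and the >>3 shift below are written from this description and are not
      copied from mm/vmscan.c:

        /* Model only: "normal" (IO/FS-capable) reclaims are throttled once
         * isolated pages exceed ~1/8 of the inactive LRU, while !__GFP_IO /
         * !__GFP_FS reclaims keep the full inactive-list threshold. */
        #include <stdbool.h>
        #include <stdio.h>

        #define MODEL_GFP_IO   0x1u
        #define MODEL_GFP_FS   0x2u
        #define MODEL_GFP_IOFS (MODEL_GFP_IO | MODEL_GFP_FS)

        static bool too_many_isolated_model(unsigned long inactive,
                                            unsigned long isolated,
                                            unsigned int gfp_mask)
        {
                /* Normal reclaims may take FS/IO locks: throttle them sooner. */
                if ((gfp_mask & MODEL_GFP_IOFS) == MODEL_GFP_IOFS)
                        inactive >>= 3;

                return isolated > inactive;
        }

        int main(void)
        {
                /* 1GB inactive list of 4KB pages, 200MB already isolated. */
                unsigned long inactive = 262144, isolated = 51200;

                printf("GFP_KERNEL-style reclaim throttled: %d\n",
                       too_many_isolated_model(inactive, isolated, MODEL_GFP_IOFS));
                printf("GFP_NOIO-style reclaim throttled:   %d\n",
                       too_many_isolated_model(inactive, isolated, MODEL_GFP_IO));
                return 0;
        }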
      
      [akpm@linux-foundation.org: tweak comment]
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Torsten Kaiser <just.for.lkml@googlemail.com>
      Tested-by: NeilBrown <neilb@suse.de>
      Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
      Acked-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Acked-by: Rik van Riel <riel@redhat.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • vmscan: comment too_many_isolated() · d37dd5dc
      Fengguang Wu authored
      Comment "Why it's doing so" rather than "What it does" as proposed by
      Andrew Morton.
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
      Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/kmemleak.c: remove obsolete simple_strtoul · dc053733
      Abhijit Pawar authored
      Replace the obsolete simple_strtoul() with kstrtoul().
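
      The replacement follows the usual pattern (variable names here are
      illustrative, not the exact kmemleak hunk); kstrtoul() reports parse
      errors instead of silently accepting malformed input:

        /* Before: trailing garbage and overflow go undetected. */
        secs = simple_strtoul(buf, NULL, 0);

        /* After: -EINVAL/-ERANGE are returned to the caller. */
        ret = kstrtoul(buf, 0, &secs);
        if (ret < 0)
                return ret;
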
      Signed-off-by: Abhijit Pawar <abhi.c.pawar@gmail.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/memory_hotplug.c: improve comments · 79a4dcef
      Tang Chen authored
      Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
      Cc: Jiang Liu <jiang.liu@huawei.com>
      Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
      Cc: Wen Congyang <wency@cn.fujitsu.com>
      Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/hugetlb: create hugetlb cgroup file in hugetlb_init · 7179e7bf
      Jianguo Wu authored
      When the kernel is built with CONFIG_HUGETLBFS=y, CONFIG_HUGETLB_PAGE=y
      and CONFIG_CGROUP_HUGETLB=y, and a hugepagesz=xx boot option is
      specified, the system fails to boot.
      
      This failure is caused by following code path:
      
        setup_hugepagesz
          hugetlb_add_hstate
            hugetlb_cgroup_file_init
              cgroup_add_cftypes
                kzalloc <--slab is *not available* yet
      
      On this path the slab allocator is not available yet, so the allocation
      fails and triggers the WARN_ON() in hugetlb_cgroup_file_init().
      
      So I move hugetlb_cgroup_file_init() into hugetlb_init(), which runs
      late enough that the slab allocator is available.
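
      A sketch of the direction of the fix (simplified; the exact signatures
      in the patch may differ):

        /* The cgroup files are no longer created from the early hugepagesz=
         * parsing path.  hugetlb_init(), an initcall that runs after the slab
         * allocator is up, walks the registered hstates instead. */
        static int __init hugetlb_init(void)
        {
                struct hstate *h;

                /* ... existing pool/sysfs setup omitted ... */

                for_each_hstate(h)
                        hugetlb_cgroup_file_init(hstate_index(h));

                return 0;
        }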
      
      [akpm@linux-foundation.org: tweak coding-style, remove pointless __init on inlined function]
      [akpm@linux-foundation.org: fix warning]
      Signed-off-by: Jianguo Wu <wujianguo@huawei.com>
      Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
      Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Acked-by: Michal Hocko <mhocko@suse.cz>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/mprotect.c: coding-style cleanups · 7d12efae
      Andrew Morton authored
      A few gremlins have recently crept in.
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • Documentation: ABI: /sys/devices/system/node/ · 5bbe1ec1
      Davidlohr Bueso authored
      Describe NUMA node sysfs files/attributes.
      
      Note that where I couldn't find the specific dates and contacts, I left
      the defaults of Oct 2002 and linux-mm.
      Signed-off-by: Davidlohr Bueso <davidlohr.bueso@hp.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • slub: drop mutex before deleting sysfs entry · 5413dfba
      Glauber Costa authored
      Sasha Levin recently reported a lockdep problem resulting from the new
      attribute propagation introduced by the kmemcg series.  In short,
      slab_mutex will be taken from within the sysfs attribute store function.
      This creates a dependency that is later held in the reverse order when a
      cache is destroyed: destruction occurs with slab_mutex held and then
      calls into the sysfs directory removal function.
      
      This patch adopts a strategy close to what __kmem_cache_create() does
      before calling sysfs_slab_add(), and releases the lock before the call
      to sysfs_slab_remove().  This is pretty much the last operation in the
      kmem_cache_shutdown() path, so we could do better by splitting it out
      and moving this call alone to a later point.  That will fit nicely once
      sysfs handling is consistent between all caches, but would look odd now.
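
      A sketch of the resulting locking pattern in __kmem_cache_shutdown()
      (simplified; it mirrors what __kmem_cache_create() already does around
      sysfs_slab_add()):

        int __kmem_cache_shutdown(struct kmem_cache *s)
        {
                int rc = kmem_cache_close(s);

                if (!rc) {
                        /* Never take the sysfs s_active count while holding
                         * slab_mutex; drop it around the directory removal. */
                        mutex_unlock(&slab_mutex);
                        sysfs_slab_remove(s);
                        mutex_lock(&slab_mutex);
                }

                return rc;
        }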
      
      Lockdep info:
      
        ======================================================
        [ INFO: possible circular locking dependency detected ]
        3.7.0-rc4-next-20121106-sasha-00008-g353b62f #117 Tainted: G        W
        -------------------------------------------------------
        trinity-child13/6961 is trying to acquire lock:
         (s_active#43){++++.+}, at:  sysfs_addrm_finish+0x31/0x60
      
        but task is already holding lock:
         (slab_mutex){+.+.+.}, at:  kmem_cache_destroy+0x22/0xe0
      
        which lock already depends on the new lock.
      
        the existing dependency chain (in reverse order) is:
        -> #1 (slab_mutex){+.+.+.}:
                lock_acquire+0x1aa/0x240
                __mutex_lock_common+0x59/0x5a0
                mutex_lock_nested+0x3f/0x50
                slab_attr_store+0xde/0x110
                sysfs_write_file+0xfa/0x150
                vfs_write+0xb0/0x180
                sys_pwrite64+0x60/0xb0
                tracesys+0xe1/0xe6
        -> #0 (s_active#43){++++.+}:
                __lock_acquire+0x14df/0x1ca0
                lock_acquire+0x1aa/0x240
                sysfs_deactivate+0x122/0x1a0
                sysfs_addrm_finish+0x31/0x60
                sysfs_remove_dir+0x89/0xd0
                kobject_del+0x16/0x40
                __kmem_cache_shutdown+0x40/0x60
                kmem_cache_destroy+0x40/0xe0
                mon_text_release+0x78/0xe0
                __fput+0x122/0x2d0
                ____fput+0x9/0x10
                task_work_run+0xbe/0x100
                do_exit+0x432/0xbd0
                do_group_exit+0x84/0xd0
                get_signal_to_deliver+0x81d/0x930
                do_signal+0x3a/0x950
                do_notify_resume+0x3e/0x90
                int_signal+0x12/0x17
      
        other info that might help us debug this:
      
         Possible unsafe locking scenario:
      
               CPU0                    CPU1
               ----                    ----
          lock(slab_mutex);
                                       lock(s_active#43);
                                       lock(slab_mutex);
          lock(s_active#43);
      
         *** DEADLOCK ***
      
        2 locks held by trinity-child13/6961:
         #0:  (mon_lock){+.+.+.}, at:  mon_text_release+0x25/0xe0
         #1:  (slab_mutex){+.+.+.}, at:  kmem_cache_destroy+0x22/0xe0
      
        stack backtrace:
        Pid: 6961, comm: trinity-child13 Tainted: G        W    3.7.0-rc4-next-20121106-sasha-00008-g353b62f #117
        Call Trace:
          print_circular_bug+0x1fb/0x20c
          __lock_acquire+0x14df/0x1ca0
          lock_acquire+0x1aa/0x240
          sysfs_deactivate+0x122/0x1a0
          sysfs_addrm_finish+0x31/0x60
          sysfs_remove_dir+0x89/0xd0
          kobject_del+0x16/0x40
          __kmem_cache_shutdown+0x40/0x60
          kmem_cache_destroy+0x40/0xe0
          mon_text_release+0x78/0xe0
          __fput+0x122/0x2d0
          ____fput+0x9/0x10
          task_work_run+0xbe/0x100
          do_exit+0x432/0xbd0
          do_group_exit+0x84/0xd0
          get_signal_to_deliver+0x81d/0x930
          do_signal+0x3a/0x950
          do_notify_resume+0x3e/0x90
          int_signal+0x12/0x17
      Signed-off-by: Glauber Costa <glommer@parallels.com>
      Reported-by: Sasha Levin <sasha.levin@oracle.com>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: Pekka Enberg <penberg@kernel.org>
      Acked-by: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • memcg: add comments clarifying aspects of cache attribute propagation · ebe945c2
      Glauber Costa authored
      This patch clarifies two aspects of cache attribute propagation.
      
      First, the expected context for the for_each_memcg_cache macro in
      memcontrol.h.  The usages already in the codebase are safe.  In mm/slub.c,
      it is trivially safe because the lock is acquired right before the loop.
      In mm/slab.c, it is less so: the lock is acquired by an outer function a
      few steps back in the stack, so a VM_BUG_ON() is added to make sure it is
      indeed safe.
      
      A comment is also added to detail why we are returning the value of the
      parent cache and ignoring the children's when we propagate the attributes.
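
      The mm/slab.c check boils down to asserting that the outer caller really
      took the lock, roughly:

        /* The propagation loop below relies on the caller having taken
         * slab_mutex a few frames up; make that assumption explicit. */
        VM_BUG_ON(!mutex_is_locked(&slab_mutex));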
      Signed-off-by: Glauber Costa <glommer@parallels.com>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Acked-by: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • kmem: add slab-specific documentation about the kmem controller · 92e79349
      Glauber Costa authored
      Signed-off-by: Glauber Costa <glommer@parallels.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Frederic Weisbecker <fweisbec@redhat.com>
      Cc: Greg Thelen <gthelen@google.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: JoonSoo Kim <js1304@gmail.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Pekka Enberg <penberg@cs.helsinki.fi>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Suleiman Souhlal <suleiman@google.com>
      Cc: Tejun Heo <tj@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • slub: slub-specific propagation changes · 107dab5c
      Glauber Costa authored
      SLUB allows us to tune a particular cache behavior with sysfs-based
      tunables.  When creating a new memcg cache copy, we'd like to preserve any
      tunables the parent cache already had.
      
      This can be done by tapping into the store attribute function provided by
      the allocator.  We of course don't need to mess with read-only fields.
      Since the attributes can have multiple types and are stored internally by
      sysfs, the best strategy is to issue a ->show() in the root cache, and
      then ->store() in the memcg cache.
      
      The drawback is that sysfs can allocate up to a page of buffer space for
      show(), which we are likely not to need but also cannot rule out.  To
      avoid always allocating a page for that, we update the caches at store
      time with the maximum attribute size ever stored to the root cache, and
      then allocate a buffer just big enough to hold it.  The corollary is
      that if no stores happened, nothing will be propagated.
      
      It can also happen that a root cache has its tunables updated during
      normal system operation.  In this case, we will propagate the change to
      all caches that are already active.
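
      A rough sketch of that propagation pass; every identifier below
      (root_of(), max_attr_size, nr_slab_attrs, struct slab_attr) is an
      illustrative placeholder, not the exact slub.c name:

        static void propagate_slab_attrs(struct kmem_cache *memcg_cache)
        {
                struct kmem_cache *root = root_of(memcg_cache); /* placeholder */
                char *buf;
                int i;

                /* Buffer sized by the largest value ever stored to the root
                 * cache, so a whole page is not allocated unconditionally. */
                buf = kmalloc(root->max_attr_size, GFP_KERNEL); /* placeholder field */
                if (!buf)
                        return;

                for (i = 0; i < nr_slab_attrs; i++) {           /* placeholder */
                        const struct slab_attr *attr = &slab_attrs[i];
                        ssize_t len;

                        if (!attr->show || !attr->store)  /* skip read-only attrs */
                                continue;

                        len = attr->show(root, buf);            /* read the root's value */
                        if (len > 0)
                                attr->store(memcg_cache, buf, len); /* copy it to the child */
                }
                kfree(buf);
        }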
      
      [akpm@linux-foundation.org: tweak code to avoid __maybe_unused]
      Signed-off-by: Glauber Costa <glommer@parallels.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Frederic Weisbecker <fweisbec@redhat.com>
      Cc: Greg Thelen <gthelen@google.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: JoonSoo Kim <js1304@gmail.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Pekka Enberg <penberg@cs.helsinki.fi>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Suleiman Souhlal <suleiman@google.com>
      Cc: Tejun Heo <tj@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • slab: propagate tunable values · 943a451a
      Glauber Costa authored
      SLAB allows us to tune a particular cache behavior with tunables.  When
      creating a new memcg cache copy, we'd like to preserve any tunables the
      parent cache already had.
      
      This could be done by an explicit call to do_tune_cpucache() after the
      cache is created.  But this is not very convenient now that the caches are
      created from common code, since this function is SLAB-specific.
      
      Another method of doing that is taking advantage of the fact that
      do_tune_cpucache() is always called from enable_cpucache(), which is
      called at cache initialization.  We can just preset the values, and then
      things work as expected.
      
      It can also happen that a root cache has its tunables updated during
      normal system operation.  In this case, we will propagate the change to
      all caches that are already active.
      
      This change requires us to move the assignment of root_cache in
      memcg_params a bit earlier.  We need this to be already set - which
      memcg_kmem_register_cache() will do - by the time we reach
      __kmem_cache_create().
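
      Roughly, the preset then looks like this at the start of
      enable_cpucache() (helper names and the exact placement follow the
      description above; the real hunk may differ in detail):

        static int enable_cpucache(struct kmem_cache *cachep, gfp_t gfp)
        {
                int limit = 0, shared = 0, batchcount = 0;

                /* A memcg copy starts from whatever its root cache uses now. */
                if (!is_root_cache(cachep)) {
                        struct kmem_cache *root = memcg_root_cache(cachep); /* assumed helper */

                        limit = root->limit;
                        shared = root->shared;
                        batchcount = root->batchcount;
                }

                if (limit && shared && batchcount)
                        goto skip_heuristics;

                /* ... the usual size-based heuristics run otherwise ... */

        skip_heuristics:
                return do_tune_cpucache(cachep, limit, batchcount, shared, gfp);
        }
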
      Signed-off-by: Glauber Costa <glommer@parallels.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Frederic Weisbecker <fweisbec@redhat.com>
      Cc: Greg Thelen <gthelen@google.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: JoonSoo Kim <js1304@gmail.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Pekka Enberg <penberg@cs.helsinki.fi>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Suleiman Souhlal <suleiman@google.com>
      Cc: Tejun Heo <tj@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • memcg: aggregate memcg cache values in slabinfo · 749c5415
      Glauber Costa authored
      When we create caches in memcgs, we need to display their usage
      information somewhere.  We'll adopt a scheme similar to /proc/meminfo,
      with aggregate totals shown in the global file, and per-group information
      stored in the group itself.
      
      For the time being, only reads are allowed in the per-group file.
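
      A sketch of the aggregation step for the global /proc/slabinfo line (the
      child-iteration helper and bound are placeholders; the slabinfo fields
      are the ones get_slabinfo() fills in):

        static void accumulate_memcg_children(struct kmem_cache *root,
                                              struct slabinfo *sinfo)
        {
                int i;

                for (i = 0; i < nr_memcg_caches; i++) {              /* placeholder bound */
                        struct kmem_cache *c = memcg_child_cache(root, i); /* placeholder */
                        struct slabinfo child;

                        if (!c)
                                continue;

                        memset(&child, 0, sizeof(child));
                        get_slabinfo(c, &child);

                        /* Fold the child's counters into the root's totals,
                         * the way /proc/meminfo shows system-wide sums. */
                        sinfo->active_objs  += child.active_objs;
                        sinfo->num_objs     += child.num_objs;
                        sinfo->active_slabs += child.active_slabs;
                        sinfo->num_slabs    += child.num_slabs;
                        sinfo->shared_avail += child.shared_avail;
                }
        }
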
      Signed-off-by: Glauber Costa <glommer@parallels.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Frederic Weisbecker <fweisbec@redhat.com>
      Cc: Greg Thelen <gthelen@google.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: JoonSoo Kim <js1304@gmail.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Pekka Enberg <penberg@cs.helsinki.fi>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Suleiman Souhlal <suleiman@google.com>
      Cc: Tejun Heo <tj@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>