1. 06 Feb, 2014 19 commits
  2. 29 Jan, 2014 10 commits
    • Linux 3.4.78 · a1322407
      Greg Kroah-Hartman authored
    • md/raid10: fix two bugs in handling of known-bad-blocks. · 7e34f43d
      NeilBrown authored
      commit b50c259e upstream.
      
      If we discover a bad block when reading we split the request and
      potentially read some of it from a different device.
      
      The code path of this has two bugs in RAID10.
      1/ we get a spin_lock with _irq, but unlock without _irq!!
      2/ The calculation of 'sectors_handled' is wrong, as can be clearly
         seen by comparison with raid1.c
      
      This leads to at least 2 warnings and a probable crash if a RAID10
      ever had known bad blocks.
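
      For illustration, the first bug is the classic lock/unlock pairing
      mismatch sketched below (the lock and list names are only illustrative,
      not a quote of raid10.c): a lock taken with spin_lock_irq() must be
      released with spin_unlock_irq(), otherwise local interrupts stay
      disabled after the critical section.

      	spin_lock_irq(&conf->device_lock);    /* disables local interrupts */
      	list_add(&r10_bio->retry_list, &conf->retry_list);
      	spin_unlock(&conf->device_lock);      /* bug: interrupts stay off */
      	/* correct pairing: spin_unlock_irq(&conf->device_lock); */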
      
      Fixes: 856e08e2
      Reported-by: Damian Nowak <spam@nowaker.net>
      URL: https://bugzilla.kernel.org/show_bug.cgi?id=68181
      Signed-off-by: NeilBrown <neilb@suse.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • md/raid10: fix bug when raid10 recovery fails to recover a block. · 511375d1
      NeilBrown authored
      commit e8b84915 upstream.
      
      commit e875ecea
          md/raid10 record bad blocks as needed during recovery.
      
      added code to the "cannot recover this block" path to record a bad
      block rather than fail the whole recovery.
      Unfortunately this new case was placed *after* r10bio was freed rather
      than *before*, yet it still uses r10bio.
      This will crash with a NULL dereference.
      
      So move the freeing of r10bio down where it is safe.
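
      In pseudocode, the ordering problem and its remedy look roughly like
      this (a simplified sketch; the bad-block recording helper shown here is
      hypothetical and stands in for the actual code path):

      	/* before: the r10bio is freed, then still used */
      	put_buf(r10_bio);
      	if (!success)
      		record_bad_block(r10_bio);   /* use after free */

      	/* after: record the bad block first, free last */
      	if (!success)
      		record_bad_block(r10_bio);
      	put_buf(r10_bio);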
      
      Fixes: e875ecea
      Reported-by: Damian Nowak <spam@nowaker.net>
      URL: https://bugzilla.kernel.org/show_bug.cgi?id=68181
      Signed-off-by: NeilBrown <neilb@suse.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • nilfs2: fix segctor bug that causes file system corruption · ddcb318f
      Andreas Rohner authored
      commit 70f2fe3a upstream.
      
      There is a bug in the function nilfs_segctor_collect, which results in
      active data being written to a segment that is marked as clean.  It is
      possible that this segment is selected for a later segment
      construction, whereby the old data is overwritten.
      
      The problem shows itself with the following kernel log message:
      
        nilfs_sufile_do_cancel_free: segment 6533 must be clean
      
      Usually a few hours later the file system gets corrupted:
      
        NILFS: bad btree node (blocknr=8748107): level = 0, flags = 0x0, nchildren = 0
        NILFS error (device sdc1): nilfs_bmap_last_key: broken bmap (inode number=114660)
      
      The issue can be reproduced with a file system that is nearly full and
      with the cleaner running while some IO-intensive task is running,
      although it is quite hard to reproduce.
      
      This is what happens:
      
       1. The cleaner starts the segment construction
       2. nilfs_segctor_collect is called
       3. sc_stage is on NILFS_ST_SUFILE and segments are freed
       4. sc_stage is on NILFS_ST_DAT and the current segment is full
       5. nilfs_segctor_extend_segments is called, which
          allocates a new segment
       6. The new segment is one of the segments freed in step 3
       7. nilfs_sufile_cancel_freev is called and produces an error message
       8. Loop around and the collection starts again
       9. sc_stage is on NILFS_ST_SUFILE and segments are freed
          including the newly allocated segment, which will contain active
          data and can be allocated at a later time
      10. A few hours later another segment construction allocates the
          segment and causes file system corruption
      
      This can be prevented by simply reordering the statements.  If
      nilfs_sufile_cancel_freev is called before nilfs_segctor_extend_segments
      the freed segments are marked as dirty and cannot be allocated any more.
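
      In outline, the fix is just a reordering of the two calls (a simplified
      sketch based on the description above; arguments are omitted and the
      surrounding error handling is not shown):

      	/* before: a segment freed earlier in this pass can still be
      	 * handed out by the extension step */
      	nilfs_segctor_extend_segments(...);
      	nilfs_sufile_cancel_freev(...);

      	/* after: cancel the frees first, so those segments are marked
      	 * dirty and can no longer be picked for the extension */
      	nilfs_sufile_cancel_freev(...);
      	nilfs_segctor_extend_segments(...);
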
      Signed-off-by: Andreas Rohner <andreas.rohner@gmx.net>
      Reviewed-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
      Tested-by: Andreas Rohner <andreas.rohner@gmx.net>
      Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • SELinux: Fix possible NULL pointer dereference in selinux_inode_permission() · 9e74d93d
      Steven Rostedt authored
      commit 3dc91d43 upstream.
      
      While running stress tests on adding and deleting ftrace instances I hit
      this bug:
      
        BUG: unable to handle kernel NULL pointer dereference at 0000000000000020
        IP: selinux_inode_permission+0x85/0x160
        PGD 63681067 PUD 7ddbe067 PMD 0
        Oops: 0000 [#1] PREEMPT
        CPU: 0 PID: 5634 Comm: ftrace-test-mki Not tainted 3.13.0-rc4-test-00033-gd2a6dde-dirty #20
        Hardware name:                  /DG965MQ, BIOS MQ96510J.86A.0372.2006.0605.1717 06/05/2006
        task: ffff880078375800 ti: ffff88007ddb0000 task.ti: ffff88007ddb0000
        RIP: 0010:[<ffffffff812d8bc5>]  [<ffffffff812d8bc5>] selinux_inode_permission+0x85/0x160
        RSP: 0018:ffff88007ddb1c48  EFLAGS: 00010246
        RAX: 0000000000000000 RBX: 0000000000800000 RCX: ffff88006dd43840
        RDX: 0000000000000001 RSI: 0000000000000081 RDI: ffff88006ee46000
        RBP: ffff88007ddb1c88 R08: 0000000000000000 R09: ffff88007ddb1c54
        R10: 6e6576652f6f6f66 R11: 0000000000000003 R12: 0000000000000000
        R13: 0000000000000081 R14: ffff88006ee46000 R15: 0000000000000000
        FS:  00007f217b5b6700(0000) GS:ffffffff81e21000(0000) knlGS:0000000000000000
        CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
        CR2: 0000000000000020 CR3: 000000006a0fe000 CR4: 00000000000007f0
        Call Trace:
          security_inode_permission+0x1c/0x30
          __inode_permission+0x41/0xa0
          inode_permission+0x18/0x50
          link_path_walk+0x66/0x920
          path_openat+0xa6/0x6c0
          do_filp_open+0x43/0xa0
          do_sys_open+0x146/0x240
          SyS_open+0x1e/0x20
          system_call_fastpath+0x16/0x1b
        Code: 84 a1 00 00 00 81 e3 00 20 00 00 89 d8 83 c8 02 40 f6 c6 04 0f 45 d8 40 f6 c6 08 74 71 80 cf 02 49 8b 46 38 4c 8d 4d cc 45 31 c0 <0f> b7 50 20 8b 70 1c 48 8b 41 70 89 d9 8b 78 04 e8 36 cf ff ff
        RIP  selinux_inode_permission+0x85/0x160
        CR2: 0000000000000020
      
      Investigating, I found that the inode->i_security was NULL, and the
      dereference of it caused the oops.
      
      In selinux_inode_permission():
      
      	isec = inode->i_security;
      
      	rc = avc_has_perm_noaudit(sid, isec->sid, isec->sclass, perms, 0, &avd);
      
      Note, the crash came from stressing the deletion and reading of debugfs
      files.  I was not able to recreate this via normal files.  But I'm not
      sure they are safe.  It may just be that the race window is much harder
      to hit.
      
      What seems to have happened (and what I have traced) is that the file
      is being opened at the same time the file or directory is being
      deleted.  As the dentry and inode locks are not held during the path
      walk, nor are the inode's ref counts being incremented, there is
      nothing saving these structures from being discarded except for an
      rcu_read_lock().
      
      The rcu_read_lock() protects against freeing of the inode, but it does
      not protect freeing of the inode_security_struct.  Now if the freeing of
      the i_security happens with a call_rcu(), and the i_security field of
      the inode is not changed (it gets freed as the inode gets freed) then
      there will be no issue here.  (Linus Torvalds suggested not setting the
      field to NULL such that we do not need to check if it is NULL in the
      permission check).
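
      A minimal sketch of that approach (shapes and names approximate, not
      the exact upstream diff): add an rcu_head to inode_security_struct and
      defer the kmem_cache_free() through call_rcu(), leaving
      inode->i_security untouched so a reader inside rcu_read_lock() still
      sees valid memory.

      	static void inode_free_rcu(struct rcu_head *head)
      	{
      		struct inode_security_struct *isec;

      		isec = container_of(head, struct inode_security_struct, rcu);
      		kmem_cache_free(sel_inode_cache, isec);
      	}

      	static void inode_free_security(struct inode *inode)
      	{
      		struct inode_security_struct *isec = inode->i_security;

      		/* ... unlink isec from the superblock's list as before ... */

      		/* defer the free past any concurrent RCU readers;
      		 * inode->i_security deliberately keeps pointing at isec */
      		call_rcu(&isec->rcu, inode_free_rcu);
      	}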
      
      Note, this is a hack, but it fixes the problem at hand.  A real fix is
      to restructure the destroy_inode() to call all the destructor handlers
      from the RCU callback.  But that is a major job to do, and requires a
      lot of work.  For now, we just band-aid this bug with this fix (it
      works), and work on a more maintainable solution in the future.
      
      Link: http://lkml.kernel.org/r/20140109101932.0508dec7@gandalf.local.home
      Link: http://lkml.kernel.org/r/20140109182756.17abaaa8@gandalf.local.home
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • hwmon: (coretemp) Fix truncated name of alarm attributes · e34cdde4
      Jean Delvare authored
      commit 3f9aec76 upstream.
      
      When the core number exceeds 9, the size of the buffer storing the
      alarm attribute name is insufficient and the attribute name is
      truncated. This causes libsensors to skip these attributes as the
      truncated name is not recognized.
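
      The failure mode is the usual snprintf()-into-a-too-small-buffer
      truncation; a rough illustration (the buffer size and attribute label
      below are only illustrative of the pattern, not a quote of the driver):

      	char name[17];   /* fits "temp9_crit_alarm" + NUL, but not two digits */

      	snprintf(name, sizeof(name), "temp%d_crit_alarm", index);
      	/* for index >= 10 the string needs 18 bytes, so snprintf() quietly
      	 * truncates it to "temp10_crit_alar" and libsensors skips it */
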
      Reported-by: Andreas Hollmann <hollmann@in.tum.de>
      Signed-off-by: Jean Delvare <khali@linux-fr.org>
      Signed-off-by: Guenter Roeck <linux@roeck-us.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • mm/memory-failure.c: recheck PageHuge() after hugetlb page migrate successfully · 9a22404c
      Jianguo Wu authored
      commit a49ecbcd upstream.
      
      After a successful hugetlb page migration by soft offline, the source
      page will either be freed into hugepage_freelists or into the buddy
      allocator (for an over-committed page).  If the page is in the buddy
      allocator, page_hstate(page) will be NULL, and we will hit a NULL
      pointer dereference in dequeue_hwpoisoned_huge_page().
      
        BUG: unable to handle kernel NULL pointer dereference at 0000000000000058
        IP: [<ffffffff81163761>] dequeue_hwpoisoned_huge_page+0x131/0x1d0
        PGD c23762067 PUD c24be2067 PMD 0
        Oops: 0000 [#1] SMP
      
      So check PageHuge(page) after migrate_pages() completes successfully.
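
      The shape of the fix, as a sketch (the migrate_pages() argument list
      shown is the mainline form of that era; names are illustrative rather
      than a quote of the 3.4 backport):

      	if (!migrate_pages(&pagelist, new_page, MPOL_MF_MOVE_ALL,
      			   MIGRATE_SYNC, MR_MEMORY_FAILURE)) {
      		/* migration succeeded: the source hugepage may already
      		 * have been released to the buddy allocator */
      		if (PageHuge(page))
      			dequeue_hwpoisoned_huge_page(page);
      	}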
      
      [wujg: backport to 3.4:
       - adjust context
       - s/num_poisoned_pages/mce_bad_pages/]
      Signed-off-by: Jianguo Wu <wujianguo@huawei.com>
      Tested-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Reviewed-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      
    • perf/x86/amd/ibs: Fix waking up from S3 for AMD family 10h · 7e99f216
      Robert Richter authored
      commit bee09ed9 upstream.
      
      On AMD family 10h we see following error messages while waking up from
      S3 for all non-boot CPUs leading to a failed IBS initialization:
      
       Enabling non-boot CPUs ...
       smpboot: Booting Node 0 Processor 1 APIC 0x1
       [Firmware Bug]: cpu 1, try to use APIC500 (LVT offset 0) for vector 0x400, but the register is already in use for vector 0xf9 on another cpu
       perf: IBS APIC setup failed on cpu #1
       process: Switch to broadcast mode on CPU1
       CPU1 is up
       ...
       ACPI: Waking up from system sleep state S3
      
      The reason for this is that during suspend the LVT offset for the IBS
      vector gets lost and needs to be reinitialized while resuming.
      
      The offset is read from the IBSCTL msr.  On family 10h the offset needs
      to be 1, as offset 0 is used for the MCE threshold interrupt, but the
      firmware assigns it to 0 for IBS too.  The kernel needs to reprogram
      the vector.  The msr is a read-only node msr, but a new value can be
      written via pci config space access.  The reinitialization is
      implemented for family 10h in setup_ibs_ctl() which is forced during
      IBS setup.
      
      This patch fixes IBS setup after waking up from S3 by adding
      resume/suspend hooks for the boot cpu which does the offset
      reinitialization.
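
      A rough sketch of such a hook using the kernel's syscore mechanism
      (from <linux/syscore_ops.h>; the reinit helper named here is
      hypothetical and stands for redoing the setup_ibs_ctl() fixup plus the
      local APIC LVT programming):

      	static void perf_ibs_resume(void)
      	{
      		/* hypothetical helper: redo the IBSCTL offset fixup and
      		 * reprogram the boot CPU's APIC LVT entry for IBS */
      		reinit_ibs_lvt_offset();
      	}

      	static struct syscore_ops perf_ibs_syscore_ops = {
      		.resume = perf_ibs_resume,
      	};

      	static void __init perf_ibs_pm_init(void)
      	{
      		register_syscore_ops(&perf_ibs_syscore_ops);
      	}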
      
      Marking it as stable to let distros pick up this fix.
      Signed-off-by: Robert Richter <rric@kernel.org>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Link: http://lkml.kernel.org/r/1389797849-5565-1-git-send-email-rric.net@gmail.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • staging: comedi: 8255_pci: fix for newer PCI-DIO48H · 6013932c
      Ian Abbott authored
      commit 0283f7a1 upstream.
      
      At some point, Measurement Computing / ComputerBoards redesigned the
      PCI-DIO48H to use a PLX PCI interface chip instead of an AMCC chip.
      This meant they had to put their hardware registers in the PCI BAR 2
      region instead of PCI BAR 1.  Unfortunately, they kept the same PCI
      device ID for the new design.  This means the driver recognizes the
      newer cards, but doesn't work (and is likely to screw up the local
      configuration registers of the PLX chip) because it's using the wrong
      region.
      
      Since the PCI subvendor and subdevice IDs were both zero on the old
      design, but are the same as the vendor and device on the new design, we
      can tell the old design and new design apart easily enough.  Split the
      existing entry for the PCI-DIO48H in `pci_8255_boards[]` into two new
      entries, referenced by different entries in the PCI device ID table
      `pci_8255_pci_table[]`.  Use the same board name for both entries.
      Signed-off-by: Ian Abbott <abbotti@mev.co.uk>
      Acked-by: H Hartley Sweeten <hsweeten@visionengravers.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • KVM: x86: Convert vapic synchronization to _cached functions (CVE-2013-6368) · 777f8f3b
      Andy Honig authored
      commit fda4e2e8 upstream.
      
      In kvm_lapic_sync_from_vapic and kvm_lapic_sync_to_vapic there is the
      potential to corrupt kernel memory if userspace provides an address that
      is at the end of a page.  This patch converts those functions to use
      kvm_write_guest_cached and kvm_read_guest_cached.  It also checks the
      vapic_address specified by userspace during ioctl processing and returns
      an error to userspace if the address is not a valid GPA.
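
      For reference, the cached accessors work roughly as sketched below: the
      GPA is validated and translated once when userspace sets the vapic
      address, and later writes go through the cache with an explicit length
      (field and variable names are approximate, not a quote of the patch):

      	/* at KVM_SET_VAPIC_ADDR time: validate and translate the GPA once */
      	if (kvm_gfn_to_hva_cache_init(vcpu->kvm, &vcpu->arch.apic->vapic_cache,
      				      vapic_addr, sizeof(u32)))
      		return -EINVAL;

      	/* in kvm_lapic_sync_to_vapic(): a bounded write through the cache,
      	 * so an address near the end of a page cannot spill past it */
      	kvm_write_guest_cached(vcpu->kvm, &vcpu->arch.apic->vapic_cache,
      			       &data, sizeof(u32));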
      
      This is generally not guest triggerable, because the required write is
      done by firmware that runs before the guest.  Also, it only affects AMD
      processors and oldish Intel that do not have the FlexPriority feature
      (unless you disable FlexPriority, of course; then newer processors are
      also affected).
      
      Fixes: b93463aa ('KVM: Accelerated apic support')
      Reported-by: Andrew Honig <ahonig@google.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Andrew Honig <ahonig@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      [ lizf: backported to 3.4: based on Paolo's backport hints for <3.10 ]
      Signed-off-by: Li Zefan <lizefan@huawei.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  3. 15 Jan, 2014 11 commits