- 06 Jul, 2011 1 commit
-
-
Linus Walleij authored
The PB11MPCore reports "3" DTCM banks, but anything above 2 is an "undefined" value, so treat this as 0. Further, add checks for the case where code is compiled to TCM even though no D/ITCM is present in the system, and for whether the compiled code actually fits. We don't BUG() since that isn't helpful; it's better to deal with non-present TCM dynamically. If nothing is compiled to TCM and no TCM is detected, the code will now just stay quiet even if TCM support is enabled. Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 29 Jun, 2011 5 commits
-
-
Mark Rutland authored
This patch adds support for platform_device_id tables, allowing new PMU types to be registered with the correct type, without requiring new platform_driver shims to provide the type. A single entry for existing devices is provided. Macros matching the functionality of the of_device_id table macros are provided for convenience. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Jamie Iles <jamie@jamieiles.com> Cc: Rob Herring <rob.herring@calxeda.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
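As a rough sketch of the mechanism described above — the enum, probe function and type constant below are illustrative placeholders, not necessarily the identifiers the patch adds — a platform_device_id table lets the platform bus hand the driver a per-device type via driver_data:

    #include <linux/platform_device.h>

    enum arm_pmu_type { ARM_PMU_DEVICE_CPU };            /* illustrative */

    static struct platform_device_id armpmu_plat_device_ids[] = {
            { .name = "arm-pmu", .driver_data = ARM_PMU_DEVICE_CPU },
            { },
    };

    static int armpmu_device_probe(struct platform_device *pdev)
    {
            /* The matched table entry carries the PMU type. */
            enum arm_pmu_type type = platform_get_device_id(pdev)->driver_data;

            /* ... register the PMU of 'type' here ... */
            return 0;
    }

    static struct platform_driver armpmu_driver = {
            .driver   = { .name = "arm-pmu" },
            .probe    = armpmu_device_probe,
            .id_table = armpmu_plat_device_ids,
    };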
-
Mark Rutland authored
This is based on an earlier patch from Rob Herring <rob.herring@calxeda.com>:

> Add OF match table to enable OF style driver binding. The dts entry is like
> this:
>
>     pmu {
>             compatible = "arm,cortex-a9-pmu";
>             interrupts = <100 101>;
>     };
>
> The use of pdev->id as an index breaks with OF device binding, so set the type
> based on the OF compatible string.

This modification sets the PMU hardware type based on data embedded in the binding, allowing easy addition of new PMU types in future. Support for new PMU types not provided by devicetree can be added later using platform_device_id tables in a similar fashion. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Jamie Iles <jamie@jamieiles.com> Acked-by: Rob Herring <rob.herring@calxeda.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
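For reference, a hedged sketch of what an OF match table keyed on the compatible string can look like; the .data payload, enum and table name are assumptions for illustration rather than the exact code added here:

    #include <linux/of.h>
    #include <linux/platform_device.h>

    enum arm_pmu_type { ARM_PMU_DEVICE_CPU };            /* illustrative */

    static const struct of_device_id armpmu_of_device_ids[] = {
            /* Resolve the PMU type from the compatible string, not pdev->id. */
            { .compatible = "arm,cortex-a9-pmu", .data = (void *)ARM_PMU_DEVICE_CPU },
            { },
    };

    static struct platform_driver armpmu_driver = {
            .driver = {
                    .name           = "arm-pmu",
                    .of_match_table = armpmu_of_device_ids,
            },
    };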
-
Mark Rutland authored
Currently, the PMU reservation framework allows for multiple PMUs of the same type to register themselves. This can lead to a bug with the sequence:

    register_pmu(pmu1);
    reserve_pmu(pmu_type);
    register_pmu(pmu2);
    release_pmu(pmu1);

Here, pmu1 cannot be released, and pmu2 cannot be reserved. This patch modifies register_pmu to reject registrations where a PMU is already present, preventing this problem. PMUs which can have multiple instances should not use the PMU reservation framework. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Jamie Iles <jamie@jamieiles.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
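A minimal sketch of the rejection described above, assuming a simple per-type slot array; the names and error handling are illustrative, not the exact framework code:

    #include <linux/errno.h>
    #include <linux/platform_device.h>

    #define NUM_PMU_TYPES 2                      /* illustrative */

    static struct platform_device *pmu_devices[NUM_PMU_TYPES];

    /* Refuse a second registration for a type that already has a PMU, so
     * the reserve/release pairing stays unambiguous. */
    static int register_pmu_sketch(int type, struct platform_device *pdev)
    {
            if (pmu_devices[type])
                    return -EBUSY;
            pmu_devices[type] = pdev;
            return 0;
    }

With such a check, the second register_pmu(pmu2) in the sequence above fails instead of silently shadowing pmu1.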
-
Mark Rutland authored
Currently, PMU platform_device reservation relies on some minor abuse of the platform_device::id field for determining the type of PMU. This is problematic for device tree based probing, where the ID cannot be controlled. This patch removes reliance on the id field, and depends on each PMU's platform driver to figure out which type it is. As all PMUs handled by the current platform_driver name "arm-pmu" are CPU PMUs, this convention is hardcoded. New PMU types can be supported through the use of {of,platform}_device_id tables. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Jamie Iles <jamie@jamieiles.com> Acked-by: Will Deacon <will.deacon@arm.com> Cc: Rob Herring <rob.herring@calxeda.com> Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Simon Horman authored
This allows a ROM-able zImage to be written to eSD and for SuperH Mobile ARM to boot directly from the SDHI hardware block. This is achieved by the MaskROM loading the first portion of the image into MERAM and then jumping to it. This portion contains loader code which copies the entire image to SDRAM and jumps to it. From there the zImage boot code proceeds as normal, uncompressing the image into its final location and then jumping to it. Cc: Paul Mundt <lethal@linux-sh.org> Acked-by: Magnus Damm <magnus.damm@gmail.com> Acked-by: Paul Mundt <lethal@linux-sh.org> Signed-off-by: Simon Horman <horms@verge.net.au> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 02 Jun, 2011 1 commit
-
-
Russell King authored
Allow SoCs to enable the scatterlist chaining support, which allows scatterlist tables to be broken up into smaller allocations. As support for this feature depends on the implementation details of the users of the scatterlists, we can't enable this globally without auditing all the users, which is a very big task. Instead, let SoCs progressively switch over to using this. SoC drivers using scatterlists and SoC DMA implementations need auditing before this option can be enabled for the SoC. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
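For context, a minimal sketch of what scatterlist chaining means for a driver; this is only safe on platforms that opt in to the support described above:

    #include <linux/kernel.h>
    #include <linux/scatterlist.h>

    static struct scatterlist first[8], second[8];

    /* Link two fixed-size scatterlist arrays into one logical table; the
     * last entry of the first array is consumed as the chain link. */
    static void chain_example(void)
    {
            sg_init_table(first, ARRAY_SIZE(first));
            sg_init_table(second, ARRAY_SIZE(second));
            sg_chain(first, ARRAY_SIZE(first), second);
    }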
-
- 01 Jun, 2011 17 commits
-
-
Linus Torvalds authored
This reverts commit a197b59a. As rmk says: "Commit a197b59a (mm: fail GFP_DMA allocations when ZONE_DMA is not configured) is causing regressions on ARM with various drivers which use GFP_DMA. The behaviour up until now has been to silently ignore that flag when CONFIG_ZONE_DMA is not enabled, and to allocate from the normal zone. However, as a result of the above commit, such allocations now fail which causes drivers to fail. These are regressions compared to the previous kernel version." so just revert it. Requested-by: Russell King <linux@arm.linux.org.uk> Acked-by: Andrew Morton <akpm@linux-foundation.org> Cc: David Rientjes <rientjes@google.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
git://git.infradead.org/iommu-2.6
Linus Torvalds authored
* git://git.infradead.org/iommu-2.6:
  intel-iommu: Fix off-by-one in RMRR setup
  intel-iommu: Add domain check in domain_remove_one_dev_info
  intel-iommu: Remove Host Bridge devices from identity mapping
  intel-iommu: Use coherent DMA mask when requested
  intel-iommu: Dont cache iova above 32bit
  intel-iommu: Speed up processing of the identity_mapping function
  intel-iommu: Check for identity mapping candidate using system dma mask
  intel-iommu: Only unlink device domains from iommu
  intel-iommu: Enable super page (2MiB, 1GiB, etc.) support
  intel-iommu: Flush unmaps at domain_exit
  intel-iommu: Remove obsolete comment from detect_intel_iommu
  intel-iommu: fix VT-d PMR disable for TXT on S3 resume
-
Linus Torvalds authored
Jens' back-merge commit 698567f3 ("Merge commit 'v2.6.39' into for-2.6.40/core") was incorrectly done, and re-introduced the DISK_EVENT_MEDIA_CHANGE lines that had been removed earlier in commits

 - 9fd097b1 ("block: unexport DISK_EVENT_MEDIA_CHANGE for legacy/fringe drivers")
 - 7eec77a1 ("ide: unexport DISK_EVENT_MEDIA_CHANGE for ide-gd and ide-cd")

because of conflicts with the "g->flags" updates near-by by commit d4dc210f ("block: don't block events on excl write for non-optical devices"). As a result, we re-introduced the hanging behavior due to infinite disk media change reports. Tssk, tssk, people! Don't do back-merges at all, and *definitely* don't do them to hide merge conflicts from me - especially as I'm likely better at merging them than you are, since I do so many merges. Reported-by: Steven Rostedt <rostedt@goodmis.org> Cc: Jens Axboe <jaxboe@fusionio.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
git://git.infradead.org/mtd-2.6
Linus Torvalds authored
* git://git.infradead.org/mtd-2.6: mtd: fix physmap.h warnings
-
David Woodhouse authored
We were mapping an extra byte (and hence usually an extra page): iommu_prepare_identity_map() expects to be given an 'end' argument which is the last byte to be mapped; not the first byte *not* to be mapped. Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
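To make the inclusive-end convention concrete, a tiny illustration (the helper below is hypothetical, not the kernel function itself):

    /* With an inclusive 'end' (the last byte to be mapped) the length is
     * end - start + 1; passing an exclusive end instead maps one byte too
     * many, which typically rounds up to a whole extra page. */
    static unsigned long map_len(unsigned long start, unsigned long end)
    {
            return end - start + 1;
    }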
-
Mike Habeck authored
The comment in domain_remove_one_dev_info() states "No need to compare PCI domain; it has to be the same". But for the si_domain that isn't going to be true, as it consists of all the PCI devices that are identity mapped thus multiple PCI domains can be in si_domain. The code needs to validate the PCI domain too. Signed-off-by: Mike Habeck <habeck@sgi.com> Signed-off-by: Mike Travis <travis@sgi.com> Cc: stable@kernel.org Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
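A sketch of the additional comparison this implies; the struct layout is simplified and the field names are modelled on intel-iommu's device_domain_info of the time, so treat them as illustrative:

    #include <linux/pci.h>

    struct dev_info_sketch {
            int segment;            /* PCI domain (segment) number */
            u8 bus, devfn;
    };

    /* si_domain can hold identity-mapped devices from several PCI domains,
     * so bus/devfn alone is not enough to identify a device. */
    static bool info_matches_dev(const struct dev_info_sketch *info,
                                 const struct pci_dev *pdev)
    {
            return info->segment == pci_domain_nr(pdev->bus) &&
                   info->bus == pdev->bus->number &&
                   info->devfn == pdev->devfn;
    }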
-
Mike Travis authored
When using the 1:1 (identity) PCI DMA remapping, PCI Host Bridge devices that do not use the IOMMU cause a kernel panic. Fix that by not inserting those devices into the si_domain. Signed-off-by: Mike Travis <travis@sgi.com> Reviewed-by: Mike Habeck <habeck@sgi.com> Cc: stable@kernel.org Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
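The kind of check this implies, sketched with the standard PCI class constant (whether the kernel uses a helper or an open-coded test is not shown here):

    #include <linux/pci.h>
    #include <linux/pci_ids.h>

    /* Host bridges don't do DMA through the IOMMU, so they must not be
     * pulled into the identity-mapped si_domain. */
    static bool pdev_is_host_bridge(const struct pci_dev *pdev)
    {
            return (pdev->class >> 8) == PCI_CLASS_BRIDGE_HOST;
    }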
-
Mike Travis authored
The __intel_map_single function is not honoring the passed in DMA mask. This results in not using the coherent DMA mask when called from intel_alloc_coherent(). Signed-off-by: Mike Travis <travis@sgi.com> Acked-by: Chris Wright <chrisw@sous-sol.org> Reviewed-by: Mike Habeck <habeck@sgi.com> Cc: stable@kernel.org Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
Chris Wright authored
Mike Travis and Mike Habeck reported an issue where iova allocation would return a range that was larger than a device's dma mask: https://lkml.org/lkml/2011/3/29/423

The dmar initialization code will reserve all PCI MMIO regions and copy those reservations into a domain specific iova tree. It is possible for one of those regions to be above the dma mask of a device. It is typical to allocate iovas with a 32bit mask (despite the device's dma mask possibly being larger) and cache the result until it exhausts the lower 32bit address space. Freeing an iova range that is >= the last iova in the lower 32bit range when there is still an iova above the 32bit range will corrupt the cached iova by pointing it to a region that is above 32bit. If that region is also larger than the device's dma mask, a subsequent allocation will return an unusable iova and cause dma failure.

Simply don't cache an iova that is above the 32bit caching boundary.

Reported-by: Mike Travis <travis@sgi.com> Reported-by: Mike Habeck <habeck@sgi.com> Cc: stable@kernel.org Acked-by: Mike Travis <travis@sgi.com> Tested-by: Mike Habeck <habeck@sgi.com> Signed-off-by: Chris Wright <chrisw@sous-sol.org> Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
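Stated as code, the rule is roughly the following; the types and boundary constant are simplified stand-ins for the iova allocator's internals:

    /* 4KiB pages: pfns below this boundary fall inside the 32-bit space. */
    #define PFN_32BIT_LIMIT (1UL << (32 - 12))

    struct iova_range_sketch {
            unsigned long pfn_lo, pfn_hi;
    };

    /* A freed range may only refresh the cached 32-bit allocation hint if
     * the range itself lies below the 32-bit caching boundary. */
    static int may_update_32bit_cache(const struct iova_range_sketch *freed)
    {
            return freed->pfn_hi < PFN_32BIT_LIMIT;
    }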
-
Mike Travis authored
When there are a large number of PCI devices, and the pass through option for iommu is set, much time is spent in the identity_mapping function hunting through the iommu domains to check if a specific device is "identity mapped". Speed up the function by checking the cached info to see if it's mapped to the static identity domain. Signed-off-by: Mike Travis <travis@sgi.com> Reviewed-by: Mike Habeck <habeck@sgi.com> Cc: stable@kernel.org Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
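A hedged sketch of the fast path: intel-iommu keeps per-device state reachable from dev.archdata.iommu, so the check can become a couple of pointer comparisons instead of a walk over all domains. The guard values and field names follow intel-iommu internals of the era and may differ in detail:

    /* Sketch only: relies on intel-iommu internals (device_domain_info,
     * si_domain, DUMMY_DEVICE_DOMAIN_INFO) rather than public APIs. */
    static int identity_mapping_sketch(struct pci_dev *pdev)
    {
            struct device_domain_info *info = pdev->dev.archdata.iommu;

            if (info && info != DUMMY_DEVICE_DOMAIN_INFO)
                    return info->domain == si_domain;

            return 0;
    }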
-
Chris Wright authored
The identity mapping code appears to make the assumption that if the device's dma_mask is greater than 32bits the device can use identity mapping. But that is not true: take the case where we have a 40bit device in a 44bit architecture. The device can potentially receive a physical address that it will truncate, causing incorrect addresses to be used. Instead, check whether the device's dma_mask is large enough to address the system's dma_mask. Signed-off-by: Mike Travis <travis@sgi.com> Reviewed-by: Mike Habeck <habeck@sgi.com> Cc: stable@kernel.org Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
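The stricter test amounts to comparing the device's mask against what the platform actually requires; roughly as below (the helper name is hypothetical, dma_get_required_mask() is the stock kernel API):

    #include <linux/dma-mapping.h>
    #include <linux/pci.h>

    /* Identity mapping is only safe when the device can address every
     * physical address the system might hand it, not merely > 32 bits. */
    static bool fits_system_dma_mask(struct pci_dev *pdev, u64 dma_mask)
    {
            return dma_mask >= dma_get_required_mask(&pdev->dev);
    }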
-
Alex Williamson authored
Commit a97590e5 added unlinking domains from iommus to reciprocate the iommu-from-domain unlinking that was already done. We actually want to only do this for device domains and never for the static identity map domain or VM domains. The SI domain is special and never freed, while VM domain->ids live in their own special address space, separate from iommu->domain_ids. In the current code, a VM can get domain->id zero, then mark that domain unused when unbound from pci-stub. This leads to DMAR write faults when the device is re-bound to the host driver. Signed-off-by: Alex Williamson <alex.williamson@redhat.com> Cc: stable@kernel.org Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
Youquan Song authored
There are no externally-visible changes with this. In the loop in the internal __domain_mapping() function, we simply detect if we are mapping:
 - size >= 2MiB, and
 - virtual address aligned to 2MiB, and
 - physical address aligned to 2MiB, and
 - on hardware that supports superpages.
(and likewise for larger superpages). We automatically use a superpage for such mappings. We never have to worry about *breaking* superpages, since we trust that we will always *unmap* the same range that was mapped. So all we need to do is ensure that dma_pte_clear_range() will also cope with superpages.

Adjust pfn_to_dma_pte() to take a superpage 'level' as an argument, so it can return a PTE at the appropriate level rather than always extending the page tables all the way down to level 1. Again, this is simplified by the fact that we should never encounter existing small pages when we're creating a mapping; any old mapping that used the same virtual range will have been entirely removed and its obsolete page tables freed.

Provide an 'intel_iommu=sp_off' argument on the command line as a chicken bit. Not that it should ever be required.

==

The original commit seen in the iommu-2.6.git was Youquan's implementation (and completion) of my own half-baked code which I'd typed into an email. Followed by half a dozen subsequent 'fixes'. I've taken the unusual step of rewriting history and collapsing the original commits in order to keep the main history simpler, and make life easier for the people who are going to have to backport this to older kernels. And also so I can give it a more coherent commit comment which (hopefully) gives a better explanation of what's going on. The original sequence of commits leading to identical code was:

Youquan Song (3):
  intel-iommu: super page support
  intel-iommu: Fix superpage alignment calculation error
  intel-iommu: Fix superpage level calculation error in dma_pfn_level_pte()

David Woodhouse (4):
  intel-iommu: Precalculate superpage support for dmar_domain
  intel-iommu: Fix hardware_largepage_caps()
  intel-iommu: Fix inappropriate use of superpages in __domain_mapping()
  intel-iommu: Fix phys_pfn in __domain_mapping for sglist pages

Signed-off-by: Youquan Song <youquan.song@intel.com> Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
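A rough sketch of the 2MiB eligibility test from the list above; the helper and constants are illustrative, and the real __domain_mapping() additionally picks among larger page sizes:

    /* Number of 4KiB pages in a 2MiB superpage. */
    #define SP_PAGES_2M (1UL << (21 - 12))

    /* Use a 2MiB PTE only when enough pages remain and both the IOVA pfn
     * and the physical pfn are 2MiB-aligned on superpage-capable hardware. */
    static int can_use_2m_superpage(unsigned long iov_pfn, unsigned long phys_pfn,
                                    unsigned long nr_pages, int hw_supports_2m)
    {
            return hw_supports_2m &&
                   nr_pages >= SP_PAGES_2M &&
                   !(iov_pfn & (SP_PAGES_2M - 1)) &&
                   !(phys_pfn & (SP_PAGES_2M - 1));
    }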
-
Randy Dunlap authored
Fix build warnings in physmap.h:

  include/linux/mtd/physmap.h:25: warning: 'struct platform_device' declared inside parameter list
  include/linux/mtd/physmap.h:25: warning: its scope is only this definition or declaration, which is probably not what you want
  include/linux/mtd/physmap.h:26: warning: 'struct platform_device' declared inside parameter list
  include/linux/mtd/physmap.h:27: warning: 'struct platform_device' declared inside parameter list

Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com> Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
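For readers unfamiliar with this warning: it fires when a header's prototypes mention struct platform_device before the compiler has seen the type. Either of the lines below, placed before the prototypes, silences it; which one the commit actually uses is not shown here, and the prototype is a made-up example:

    struct platform_device;                    /* forward declaration */
    /* #include <linux/platform_device.h> */   /* ...or pull in the full type */

    void example_physmap_hook(struct platform_device *pdev);   /* hypothetical prototype */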
-
Linus Torvalds authored
Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/security-testing-2.6

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/security-testing-2.6:
  AppArmor: fix oops in apparmor_setprocattr
-
Mike Frysinger authored
The new instruction_pointer_set helper is defined for people who have converted to asm-generic/ptrace.h, so don't use it generally unless the arch needs it (in which case it has been converted). This should fix building of kgdb tests for arches not yet converted. Signed-off-by: Mike Frysinger <vapier@gentoo.org> Acked-by: Stephen Rothwell <sfr@canb.auug.org.au> Cc: Jason Wessel <jason.wessel@windriver.com> Cc: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Kees Cook authored
When invalid parameters are passed to apparmor_setprocattr, a NULL deref oops occurs when it tries to record an audit message. This is because it passes NULL as the profile parameter to aa_audit, but aa_audit now requires that the profile passed is not NULL. Fix this by passing the current profile of the task that is trying to setprocattr. Signed-off-by: Kees Cook <kees@ubuntu.com> Signed-off-by: John Johansen <john.johansen@canonical.com> Cc: stable@kernel.org Signed-off-by: James Morris <jmorris@namei.org>
-
- 31 May, 2011 9 commits
-
-
git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux-2.6-for-linus
Linus Torvalds authored
* git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux-2.6-for-linus:
  virtio_net: delay TX callbacks
  virtio: add api for delayed callbacks
  virtio_test: support event index
  vhost: support event index
  virtio_ring: support event idx feature
  virtio ring: inline function to check for events
  virtio: event index interface
  virtio: add full three-clause BSD text to headers.
  virtio balloon: kill tell-host-first logic
  virtio console: don't manually set or finalize VIRTIO_CONSOLE_F_MULTIPORT.
  drivers, block: virtio_blk: Replace cryptic number with the macro
  virtio_blk: allow re-reading config space at runtime
  lguest: remove support for VIRTIO_F_NOTIFY_ON_EMPTY.
  lguest: fix up compilation after move
  lguest: fix timer interrupt setup
-
git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux-2.6
Linus Torvalds authored
* 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux-2.6: [IA64] wire up sendmmsg() syscall for Itanium
-
Tony Luck authored
Add entries in unistd.h and entry.S to make this new syscall visible. Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Linus Torvalds authored
Merge branch 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip

* 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  x86: Fix mwait_play_dead() faulting on mwait-incapable cpus
  x86 idle: Fix mwait deprecation warning message

Evil merge to remove extra quote noticed by Joe Perches
-
Linus Torvalds authored
Merge branch 'core-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip

* 'core-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  rcu: Cure load woes
-
Linus Torvalds authored
Merge branch 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip

* 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  x86: Put back -pg to tsc.o and add no GCOV to vread_tsc_64.o
-
git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs-2.6
Linus Torvalds authored
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs-2.6:
  autofs4: bogus dentry_unhash() added in ->unlink()
  vfs: shrink_dcache_parent before rmdir, dir rename
-
Benjamin Herrenschmidt authored
The Apple custom PIC only exists in some earlier machine models; anything with an MPIC will crash on suspend if we register those syscore ops unconditionally. This is a regression caused by commit f5a592f7 ("PM / PowerPC: Use struct syscore_ops instead of sysdevs for PM"). Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Peter Zijlstra authored
Commit cc3ce517 (rcu: Start RCU kthreads in TASK_INTERRUPTIBLE state) fudges a sleeping task's state, resulting in the scheduler seeing a TASK_UNINTERRUPTIBLE task going to sleep, but a TASK_INTERRUPTIBLE task waking up. The result is unbalanced load calculation. The problem that patch tried to address is that the RCU threads could stay in UNINTERRUPTIBLE state for quite a while and trigger the hung task detector due to on-demand wake-ups. Cure the problem differently by always giving the tasks at least one wake-up once the CPU is fully up and running; this will kick them out of the initial UNINTERRUPTIBLE state and into the regular INTERRUPTIBLE wait state. [ The alternative would be teaching kthread_create() to start threads as INTERRUPTIBLE but that needs a tad more thought. ] Reported-by: Damien Wyart <damien.wyart@free.fr> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Acked-by: Paul E. McKenney <paul.mckenney@linaro.org> Link: http://lkml.kernel.org/r/1306755291.1200.2872.camel@twins Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 30 May, 2011 7 commits
-
-
Avi Kivity authored
A logic error in mwait_play_dead() causes the kernel to use mwait even on cpus which don't support it, such as KVM virtual cpus. Introduced by: 349c004e: x86: A fast way to check capabilities of the current cpu Fixes: https://bugzilla.kernel.org/show_bug.cgi?id=36222 Reported-by: Török Edwin <edwintorok@gmail.com> Signed-off-by: Avi Kivity <avi@redhat.com> Cc: Christoph Lameter <cl@linux.com> Cc: Tejun Heo <tj@kernel.org> Link: http://lkml.kernel.org/r/1306758237-9327-1-git-send-email-avi@redhat.com Signed-off-by: Ingo Molnar <mingo@elte.hu>
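A minimal sketch of the kind of guard implied here, using the per-cpu capability helper from the commit referenced above; whether the actual fix is exactly this early return is not shown:

    #include <asm/cpufeature.h>

    /* Skip the monitor/mwait play-dead path entirely when the CPU (for
     * example a KVM virtual CPU) lacks MWAIT; the caller then falls back
     * to a plain hlt loop. */
    static inline bool mwait_unsupported(void)
    {
            return !this_cpu_has(X86_FEATURE_MWAIT);
    }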
-
Borislav Petkov authored
Fix:

  arch/x86/kernel/process.c:645:1: warning: unknown escape sequence '\i'

due to missing escape backslash, introduced by this commit: 5d4c47e0: x86 idle: deprecate mwait_idle() and "idle=mwait" cmdline param Signed-off-by: Borislav Petkov <bp@alien8.de> Cc: Len Brown <len.brown@intel.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Link: http://lkml.kernel.org/r/1306748286-24701-1-git-send-email-bp@alien8.de Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Al Viro authored
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Sage Weil authored
The dentry_unhash push-down series missed that shrink_dcache_parent needs to be called prior to rmdir or dir rename to clear DCACHE_REFERENCED and allow efficient dentry reclaim. Reported-by: Dave Chinner <david@fromorbit.com> Signed-off-by: Sage Weil <sage@newdream.net> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Michael S. Tsirkin authored
Ask for delayed callbacks on TX ring full, to give the other side more of a chance to make progress. Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Acked-by: David S. Miller <davem@davemloft.net> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Michael S. Tsirkin authored
Add an API that tells the other side that callbacks should be delayed until a lot of work has been done. Implement using the new event_idx feature. Note: it might seem advantageous to let the drivers ask for a callback after a specific capacity has been reached. However, as a single head can free many entries in the descriptor table, we don't really have a clue about capacity until get_buf is called. The API is the simplest to implement at the moment; we'll see what kind of hints drivers can pass when there's more than one user of the feature. Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
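A sketch of how a driver might consume this from its TX-completion path; virtqueue_enable_cb_delayed() and virtqueue_get_buf() are the public virtqueue calls, while the surrounding functions are illustrative:

    #include <linux/virtio.h>

    /* Drain whatever the device has already marked as used. */
    static void reclaim_used(struct virtqueue *vq)
    {
            unsigned int len;
            void *buf;

            while ((buf = virtqueue_get_buf(vq, &len)) != NULL)
                    ;       /* a real driver would free/complete 'buf' here */
    }

    static void tx_done_example(struct virtqueue *vq)
    {
            reclaim_used(vq);

            /* Re-arm the callback, but ask for it to be delayed until a
             * sizeable batch of buffers has been used.  A false return
             * means more work already arrived, so harvest it now. */
            if (!virtqueue_enable_cb_delayed(vq))
                    reclaim_used(vq);
    }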
-
Michael S. Tsirkin authored
Add ability to test the new event idx feature, enable by default. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-