- 25 Sep, 2018 1 commit
-
-
Susobhan Dey authored
Signed-off-by: Susobhan Dey <susobhan.dey@gmail.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
-
- 17 Sep, 2018 1 commit
-
-
Hannes Reinecke authored
When issuing a short read on the ANA log page, the number of groups should not change, even though the final returned data might contain fewer groups than that number. Signed-off-by: Hannes Reinecke <hare@suse.com> [switched to a for loop] Signed-off-by: Christoph Hellwig <hch@lst.de>
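A minimal sketch of the clamping idea, in C; the struct and function names here are illustrative stand-ins rather than the driver's real identifiers:

```c
/*
 * Illustrative only: process at most as many group descriptors as the
 * (possibly short) read actually returned, while leaving the advertised
 * group count untouched for the next, hopefully full, read.
 */
struct ana_group_desc {
	unsigned int grpid;
	/* ... */
};

static void scan_ana_groups(const struct ana_group_desc *descs,
			    unsigned int advertised_groups,
			    unsigned int returned_groups)
{
	unsigned int i;
	unsigned int n = returned_groups < advertised_groups ?
			 returned_groups : advertised_groups;

	for (i = 0; i < n; i++) {
		/* handle descs[i]; advertised_groups is never reduced here */
	}
}
```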
-
- 13 Sep, 2018 1 commit
-
-
Jens Axboe authored
The support added for zones in null_blk seems to assume that only rq-based operation is possible. But this depends on the queue_mode setting: if it is set to 0, then cmd->bio is what we need to be operating on. Right now any attempt to load null_blk with queue_mode=0 will insta-crash, since cmd->rq is NULL and null_handle_cmd() assumes it to always be set. Make the zoned code deal with bios, or pass in the appropriate sector/nr_sectors, instead. Fixes: ca4b2a01 ("null_blk: add zone support") Tested-by: Omar Sandoval <osandov@fb.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
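Roughly what the bio-aware path has to look like; this is a sketch with a simplified cmd layout, not the driver's actual code, though blk_rq_pos()/blk_rq_sectors() and the bio iterator fields are the usual block-layer accessors:

```c
/*
 * Sketch: derive the sector range from the bio when running in bio mode
 * (queue_mode == 0) and from the request otherwise, so the zoned code
 * never dereferences a NULL cmd->rq.
 */
sector_t sector, nr_sectors;

if (queue_mode == 0) {				/* bio-based operation */
	sector     = cmd->bio->bi_iter.bi_sector;
	nr_sectors = bio_sectors(cmd->bio);
} else {					/* rq-based operation */
	sector     = blk_rq_pos(cmd->rq);
	nr_sectors = blk_rq_sectors(cmd->rq);
}
/* zoned handling then works purely on sector/nr_sectors */
```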
-
- 11 Sep, 2018 1 commit
-
-
Jens Axboe authored
After merging the iolatency policy, we potentially now have 4 policies being registered, but only support 3. This causes one of them to fail loading. Takashi reports that BFQ no longer works for him, because it fails to load due to policy registration failure. Bump to 5 policies, and also add a warning for when we have exceeded the global amount. If we have to touch this again, we should switch to a dynamic scheme instead. Reported-by: Takashi Iwai <tiwai@suse.de> Reviewed-by: Jeff Moyer <jmoyer@redhat.com> Tested-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
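In outline, the registration-side change amounts to something like the sketch below; the warning text is illustrative, taken from the description above rather than from the exact patch:

```c
/* Bumped from 3: four policies can now be built in, plus headroom. */
#define BLKCG_MAX_POLS	5

int i;

for (i = 0; i < BLKCG_MAX_POLS; i++)
	if (!blkcg_policy[i])
		break;

if (i >= BLKCG_MAX_POLS) {
	/* fail loudly instead of letting a policy silently fail to load */
	pr_warn("blkcg_policy_register: BLKCG_MAX_POLS too small\n");
	return -ENOSPC;
}
/* ... register the policy in slot i ... */
```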
-
- 10 Sep, 2018 1 commit
-
-
Jens Axboe authored (merge of git://git.infradead.org/nvme)
Pull single NVMe fix from Christoph. * 'nvme-4.19' of git://git.infradead.org/nvme: nvmet-rdma: fix possible bogus dereference under heavy load
-
- 06 Sep, 2018 1 commit
-
-
Konstantin Khlebnikov authored
Fix a trivial use-after-free: this could be the last reference to bfqg. Fixes: 8f9bebc3 ("block, bfq: access and cache blkg data only when safe") Acked-by: Paolo Valente <paolo.valente@linaro.org> Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
- 05 Sep, 2018 2 commits
-
-
Mikulas Patocka authored
It is possible to call fsync on a read-only handle (for example, fsck.ext2 does it when doing a read-only check), and this call results in a kernel warning. The patch b089cfd9 ("block: don't warn for flush on read-only device") attempted to disable the warning, but it is buggy and doesn't work (op_is_flush() tests the flags, but bio_op() strips the flags off). Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Fixes: 721c7fc7 ("block: fail op_is_write() requests to read-only partitions") Cc: stable@vger.kernel.org # 4.18 Signed-off-by: Jens Axboe <axboe@kernel.dk>
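To make the op_is_flush()/bio_op() distinction concrete, here is a minimal sketch of why the earlier check could never fire, assuming the usual definitions of these helpers:

```c
/*
 * bio_op() masks bi_opf down to the REQ_OP_* number, discarding flag
 * bits such as REQ_PREFLUSH and REQ_FUA, so a flush check has to look
 * at the unmasked field.
 */
bool broken_check = op_is_flush(bio_op(bio));	/* always false: flags stripped */
bool working_check = op_is_flush(bio->bi_opf);	/* sees REQ_PREFLUSH / REQ_FUA */

/* only the second form can be used to skip the read-only warning */
```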
-
Sagi Grimberg authored
Currently we always repost the recv buffer before we send a response capsule back to the host. Since ordering is not guaranteed for send and recv completions, it is possible that we will receive a new request from the host before we get a send completion for the response capsule. Today, we pre-allocate twice as many rsps as the queue length, but in reality, under heavy load there is nothing really preventing the gap from expanding until we exhaust all our rsps. To fix this, if we don't have any pre-allocated rsps left, we dynamically allocate a rsp and make sure to free it when we are done. If under memory pressure we fail to allocate a rsp, we silently drop the command and wait for the host to retry. Reported-by: Steve Wise <swise@opengridcomputing.com> Tested-by: Steve Wise <swise@opengridcomputing.com> Signed-off-by: Sagi Grimberg <sagi@grimberg.me> [hch: dropped a superfluous assignment] Signed-off-by: Christoph Hellwig <hch@lst.de>
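A simplified sketch of the allocate-on-demand fallback described above; the struct, helper names, and GFP choice are illustrative, not the driver's actual identifiers:

```c
/* Prefer a preallocated rsp; fall back to a dynamic allocation and
 * remember to free it on completion.  On allocation failure the caller
 * drops the command and relies on the host retrying. */
struct rsp *get_rsp(struct rsp_queue *q)
{
	struct rsp *r = pop_free_rsp(q);	/* preallocated pool, hypothetical */

	if (!r) {
		r = kzalloc(sizeof(*r), GFP_KERNEL);	/* gfp flag is a sketch detail */
		if (!r)
			return NULL;		/* silently drop, host retries */
		r->dynamically_allocated = true;
	}
	return r;
}

void put_rsp(struct rsp_queue *q, struct rsp *r)
{
	if (r->dynamically_allocated)
		kfree(r);
	else
		push_free_rsp(q, r);		/* back to the pool */
}
```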
-
- 04 Sep, 2018 1 commit
-
-
Jens Axboe authored
syzbot reports a divide-by-zero triggered via the NBD_SET_BLKSIZE ioctl. We need proper validation of the input here: not just whether it is zero, but also whether the value is a power of two and in a valid range. Add that. Cc: stable@vger.kernel.org Reported-by: syzbot <syzbot+25dbecbec1e62c6b0dd4@syzkaller.appspotmail.com> Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
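A hedged sketch of the kind of validation the message calls for; the bounds used here (512 bytes up to a page) are assumptions for illustration, not necessarily the exact limits the patch uses:

```c
/* Reject zero, out-of-range, and non-power-of-two block sizes before
 * the value is ever used as a divisor. */
static bool nbd_blksize_is_valid(unsigned long blksize)
{
	if (blksize < 512 || blksize > PAGE_SIZE)
		return false;
	return is_power_of_2(blksize);	/* also rejects 0 */
}
```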
-
- 31 Aug, 2018 3 commits
-
-
Dennis Zhou (Facebook) authored
There is a very small chance a bio gets caught up in a really unfortunate race between a task migration, cgroup exiting, and itself trying to associate with a blkg. This is due to css offlining being performed after the css->refcnt is killed, which triggers removal of blkgs that reach a blkg->refcnt of 0. To avoid this, association with a blkg should use tryget and fall back to using the root_blkg. Fixes: 08e18eab ("block: add bi_blkg to the bio for cgroups") Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Dennis Zhou <dennisszhou@gmail.com> Cc: Jiufei Xue <jiufei.xue@linux.alibaba.com> Cc: Joseph Qi <joseph.qi@linux.alibaba.com> Cc: Tejun Heo <tj@kernel.org> Cc: Josef Bacik <josef@toxicpanda.com> Cc: Jens Axboe <axboe@kernel.dk> Signed-off-by: Jens Axboe <axboe@kernel.dk>
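The association rule reduces to roughly the following; the helper names are approximate and the reference handling is simplified to show only the tryget-then-fallback shape:

```c
/* Take a reference on the candidate blkg with a tryget; if the cgroup
 * is already being torn down, fall back to the root blkg, which is
 * guaranteed to outlive the request_queue. */
if (blkg && blkg_try_get(blkg)) {
	bio->bi_blkg = blkg;
} else {
	blkg_get(q->root_blkg);		/* root cannot be offlined underneath us */
	bio->bi_blkg = q->root_blkg;
}
```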
-
Dennis Zhou (Facebook) authored
Currently, blkcg destruction relies on a sequence of events:
1. Destruction starts. blkcg_css_offline() is called and blkgs release their reference to the blkcg. This immediately destroys the cgwbs (writeback).
2. With blkgs giving up their reference, the blkcg ref count should become zero and eventually call blkcg_css_free(), which finally frees the blkcg.
Jiufei Xue reported that there is a race between blkcg_bio_issue_check() and cgroup_rmdir(). To remedy this, blkg destruction becomes contingent on the completion of all writeback associated with the blkcg. A count of the number of cgwbs is maintained and once that goes to zero, blkg destruction can follow. This should prevent premature blkg destruction related to writeback. The new process for blkcg cleanup is as follows:
1. Destruction starts. blkcg_css_offline() is called, which offlines writeback. Blkg destruction is delayed on the cgwb_refcnt count to avoid punting potentially large amounts of outstanding writeback to root while maintaining any ongoing policies. Here, the base cgwb_refcnt is put back.
2. When the cgwb_refcnt becomes zero, blkcg_destroy_blkgs() is called and handles destruction of blkgs. This is where the css reference held by each blkg is released.
3. Once the blkcg ref count goes to zero, blkcg_css_free() is called. This finally frees the blkcg.
It seems that in the past blk-throttle didn't do the most understandable things with taking data from a blkg while associating with current, so the simplification and unification of what blk-throttle is doing caused this. Fixes: 08e18eab ("block: add bi_blkg to the bio for cgroups") Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Dennis Zhou <dennisszhou@gmail.com> Cc: Jiufei Xue <jiufei.xue@linux.alibaba.com> Cc: Joseph Qi <joseph.qi@linux.alibaba.com> Cc: Tejun Heo <tj@kernel.org> Cc: Josef Bacik <josef@toxicpanda.com> Cc: Jens Axboe <axboe@kernel.dk> Signed-off-by: Jens Axboe <axboe@kernel.dk>
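The gating mechanism can be pictured as a small refcount helper along these lines; the names mirror the description above (cgwb_refcnt, blkcg_destroy_blkgs) but the body is an approximation, not the patch itself:

```c
/* Each active cgwb holds a reference; blkg teardown only starts once
 * the last writeback reference (plus the base reference dropped at
 * offline time) is gone. */
static inline void blkcg_cgwb_put(struct blkcg *blkcg)
{
	if (refcount_dec_and_test(&blkcg->cgwb_refcnt))
		blkcg_destroy_blkgs(blkcg);	/* all writeback finished */
}
```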
-
Dennis Zhou (Facebook) authored
This reverts commit 4c699480. Destroying blkgs is tricky because of the nature of the relationship. A blkg should go away when either a blkcg or a request_queue goes away. However, blkgs pin the blkcg to ensure they remain valid. To break this cycle, when a blkcg is offlined, blkgs put back their css ref. This eventually lets css_free() get called, which frees the blkcg. The above commit (4c699480) breaks this order of events by trying to destroy blkgs in css_free(). As the blkgs still hold references to the blkcg, css_free() is never called. The race between blkcg_bio_issue_check() and cgroup_rmdir() will be addressed in the following patch by delaying destruction of a blkg until all writeback associated with the blkcg has finished. Fixes: 4c699480 ("blk-throttle: fix race between blkcg_bio_issue_check() and cgroup_rmdir()") Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Dennis Zhou <dennisszhou@gmail.com> Cc: Jiufei Xue <jiufei.xue@linux.alibaba.com> Cc: Joseph Qi <joseph.qi@linux.alibaba.com> Cc: Tejun Heo <tj@kernel.org> Cc: Jens Axboe <axboe@kernel.dk> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
- 29 Aug, 2018 2 commits
-
-
Jens Axboe authored (merge of git://git.infradead.org/nvme)
Pull NVMe fixes from Christoph. * 'nvme-4.19' of git://git.infradead.org/nvme: nvmet: free workqueue object if module init fails nvme-fcloop: Fix dropped LS's to removed target port nvme-pci: add a memory barrier to nvme_dbbuf_update_and_check_event
-
Scott Bauer authored
Like d88b6d04 ("cdrom: information leak in cdrom_ioctl_media_changed()"), there is another cast from unsigned long to int which causes a bounds check to fail with specially crafted input. The value is then used as an index into the slot array in cdrom_slot_status(). Signed-off-by: Scott Bauer <scott.bauer@intel.com> Signed-off-by: Scott Bauer <sbauer@plzdonthack.me> Cc: stable@vger.kernel.org Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
- 28 Aug, 2018 5 commits
-
-
Chaitanya Kulkarni authored
Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
-
James Smart authored
When a targetport is removed from the config, fcloop will avoid calling the LS done() routine, thinking the targetport is gone. This leaves the initiator reset/reconnect hanging as it waits for a status on the Create_Association LS for the reconnect. Change the filter in the LS callback path: if tport is null (set when validation failed before "sending to remote port"), be sure to call done(). This was the main bug. But continue the logic that only calls done() if tport was set but there is no remoteport (e.g. the case where the remoteport has been removed, thus the host doesn't expect a completion). Signed-off-by: James Smart <james.smart@broadcom.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Michal Wnukowski authored
In many architectures loads may be reordered with older stores to different locations. In the nvme driver the following two operations could be reordered:
- Write shadow doorbell (dbbuf_db) into memory.
- Read EventIdx (dbbuf_ei) from memory.
This can result in a potential race condition between the driver and the VM host processing requests (if the given virtual NVMe controller has support for the shadow doorbell). If that occurs, the NVMe controller may decide to wait for an MMIO doorbell from the guest operating system, and the guest driver may decide not to issue an MMIO doorbell on any of the subsequent commands. This issue is purely timing-dependent, so there is no easy way to reproduce it. Currently the easiest known approach is to run "Oracle IO Numbers" (orion) as shipped with Oracle DB:
orion -run advanced -num_large 0 -size_small 8 -type rand -simulate \
concat -write 40 -duration 120 -matrix row -testname nvme_test
Where nvme_test is a .lun file that contains a list of NVMe block devices to run the test against. Limiting the number of vCPUs assigned to a given VM instance seems to increase the chances of this bug occurring. On a test environment with a VM that got 4 NVMe drives and 1 vCPU assigned, the virtual NVMe controller hang could be observed within 10-20 minutes. That corresponds to about 400-500k IO operations processed (or about 100GB of IO reads/writes). The orion tool was used as validation and set to run in a loop for 36 hours (equivalent to pushing 550M IO operations). No issues were observed, which suggests that the patch fixes the issue. Fixes: f9f38e33 ("nvme: improve performance for virtual NVMe devices") Signed-off-by: Michal Wnukowski <wnukowski@google.com> Reviewed-by: Keith Busch <keith.busch@intel.com> Reviewed-by: Sagi Grimberg <sagi@grimberg.me> [hch: updated changelog and comment a bit] Signed-off-by: Christoph Hellwig <hch@lst.de>
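In outline, the ordering the barrier enforces looks like this; need_mmio() and the variable names are placeholders rather than the driver's real helpers, while mb() and writel() are the standard kernel primitives:

```c
old = *dbbuf_db;
*dbbuf_db = new_value;		/* 1. publish the shadow doorbell write */
mb();				/* 2. full barrier: the store above must be
				 *    visible before the load below */
event_idx = *dbbuf_ei;		/* 3. read EventIdx only after the barrier */

if (need_mmio(event_idx, new_value, old))	/* hypothetical check */
	writel(new_value, mmio_doorbell);	/* ring the real doorbell */
```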
-
John Pittman authored
Currently, the ref_count variable within the bsg_device struct is of type atomic_t. For variables being used as reference counters, the refcount API should be used instead of atomic. The newer refcount API works to prevent counter overflows and use-after-free bugs. So, move this variable from the atomic API to refcount, potentially avoiding the issues mentioned. Signed-off-by: John Pittman <jpittman@redhat.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
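The shape of such a conversion, for illustration; the call sites shown are generic examples of the pattern, not a list of the exact lines the patch touches:

```c
/* refcount_t saturates instead of wrapping and warns on misuse,
 * unlike a bare atomic_t used as a reference counter. */
struct bsg_device {
	refcount_t ref_count;
	/* ... */
};

refcount_set(&bd->ref_count, 1);		/* was atomic_set()          */
refcount_inc(&bd->ref_count);			/* was atomic_inc()          */
if (refcount_dec_and_test(&bd->ref_count))	/* was atomic_dec_and_test() */
	kfree(bd);
```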
-
Chengguang Xu authored
kmem_cache_destroy() can handle NULL pointer correctly, so there is no need to check e->icq_cache before calling kmem_cache_destroy(). Signed-off-by: Chengguang Xu <cgxu519@gmx.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
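Concretely, the cleanup is just the removal of a redundant guard, along these lines:

```c
/* before: the NULL check is unnecessary */
if (e->icq_cache)
	kmem_cache_destroy(e->icq_cache);

/* after: kmem_cache_destroy() already returns early on NULL */
kmem_cache_destroy(e->icq_cache);
```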
-
- 27 Aug, 2018 10 commits
-
-
Linus Walleij authored
The DMA is broken on this specific device for some unknown reason (probably badly designed or plain broken interface electronics) and it will only work with PIO. Other users of the same hardware do not have this problem. Add a specific quirk so that this Gemini device gets DMA turned off. Also fix up some code around passing the port information around in probe while we're at it. Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Jens Axboe authored
We already note and mark discard and swap IO from bio_to_wbt_flags(). Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Jens Axboe authored
Merge branch 'stable/for-jens-4.19' of git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen into for-linus Pull Xen block driver fixes from Konrad: "Fix for flushing out persistent pages at a deterministic rate" * 'stable/for-jens-4.19' of git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen: xen/blkback: remove unused pers_gnts_lock from struct xen_blkif_ring xen/blkback: move persistent grants flags to bool xen/blkfront: reorder tests in xlblk_init() xen/blkfront: cleanup stale persistent grants xen/blkback: don't keep persistent grants too long
-
Jens Axboe authored
We have two potential issues: 1) After commit 2887e41b, we only wake one process at the time when we finish an IO. We really want to wake up as many tasks as can queue IO. Before this commit, we woke up everyone, which could cause a thundering herd issue. 2) A task can potentially consume two wakeups, causing us to (in practice) miss a wakeup. Fix both by providing our own wakeup function, which stops __wake_up_common() from waking up more tasks if we fail to get a queueing token. With the strict ordering we have on the wait list, this wakes the right tasks and the right amount of tasks. Based on a patch from Jianchao Wang <jianchao.w.wang@oracle.com>. Tested-by: Agarwal, Anchal <anchalag@amazon.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
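The core of the described approach is a custom wake callback roughly like the one below; struct wbt_wait_data, its fields, and try_get_queueing_token() are illustrative stand-ins, while the negative-return convention is what stops __wake_up_common() from walking further down the wait list:

```c
static int wbt_wake_function(struct wait_queue_entry *curr, unsigned int mode,
			     int wake_flags, void *key)
{
	struct wbt_wait_data *data =
		container_of(curr, struct wbt_wait_data, wq);

	/* No queueing token available: stop waking more tasks. */
	if (!try_get_queueing_token(data->rwb))
		return -1;

	data->got_token = true;
	/* Wake this task and remove it from the strictly ordered wait list. */
	return autoremove_wake_function(curr, mode, wake_flags, key);
}
```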
-
Jens Axboe authored
Prep patch for calling the handler from a different context, no functional changes in this patch. Tested-by: Agarwal, Anchal <anchalag@amazon.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Juergen Gross authored
pers_gnts_lock isn't being used anywhere. Remove it. Signed-off-by: Juergen Gross <jgross@suse.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
Juergen Gross authored
The struct persistent_gnt flags member is meant to be a bitfield of different flags. There is only PERSISTENT_GNT_ACTIVE flag left, so convert it to a bool named "active". Signed-off-by: Juergen Gross <jgross@suse.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
Juergen Gross authored
In case we don't want pv block devices we should not test parameters for sanity and possibly print out error messages. So test the precluding conditions before checking parameters. Signed-off-by: Juergen Gross <jgross@suse.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
Juergen Gross authored
Add a periodic cleanup function to remove old persistent grants which are no longer in use on the backend side. This avoids starvation in case there are lots of persistent grants for a device which is no longer involved in much I/O. Signed-off-by: Juergen Gross <jgross@suse.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
Juergen Gross authored
Persistent grants are allocated until a per-ring threshold is reached. Those grants won't be freed until the ring is destroyed, meaning resources are kept busy which might no longer be used. Instead of freeing persistent grants only until the threshold is reached, add a timestamp and remove all persistent grants that have not been in use for a minute. Signed-off-by: Juergen Gross <jgross@suse.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
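A sketch of the aging policy this describes; the field names, list layout, and helper are illustrative, with only jiffies, HZ, time_before(), and list_for_each_entry_safe() being the standard kernel pieces:

```c
#define PERSISTENT_GNT_TIMEOUT	(60 * HZ)	/* "not used for a minute" */

/* stamp on every use */
grant->last_used = jiffies;

/* periodic cleanup, e.g. from the ring's purge work */
list_for_each_entry_safe(grant, tmp, &ring->persistent_gnts_list, node) {
	if (grant->active)
		continue;			/* currently mapped into a request */
	if (time_before(jiffies, grant->last_used + PERSISTENT_GNT_TIMEOUT))
		continue;			/* used recently, keep it */
	free_persistent_grant(ring, grant);	/* hypothetical helper */
}
```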
-
- 25 Aug, 2018 6 commits
-
-
Linus Torvalds authored (merge of git://git.kernel.dk/linux-block)
Pull block fixes from Jens Axboe: "A few small fixes for this merge window: - Locking imbalance fix for bcache (Shan Hai) - A few small fixes for wbt. One is a cleanup/prep, one is a fix for an existing issue, and the last two are fixes for changes that went into this merge window (me)" * tag 'for-linus-20180825' of git://git.kernel.dk/linux-block: blk-wbt: don't maintain inflight counts if disabled blk-wbt: fix has-sleeper queueing check blk-wbt: use wq_has_sleeper() for wq active check blk-wbt: move disable check into get_limit() bcache: release dc->writeback_lock properly in bch_writeback_thread()
-
Linus Torvalds authored (merge of git://git.infradead.org/linux-ubifs)
Pull UBIFS fix from Richard Weinberger: "Remove an empty file from UBIFS source" * tag 'upstream-4.19-rc1-fix' of git://git.infradead.org/linux-ubifs: ubifs: Remove empty file.h
-
Linus Torvalds authored (merge of git://git.samba.org/sfrench/cifs-2.6)
Pull cifs fixes from Steve French: "Three small SMB3 fixes, one for stable" * tag '4.19-rc-smb3' of git://git.samba.org/sfrench/cifs-2.6: cifs: update internal module version number for cifs.ko to 2.12 cifs: check kmalloc before use cifs: check if SMB2 PDU size has been padded and suppress the warning cifs: create a define for how many iovs we need for an SMB2_open()
-
Linus Torvalds authored
This is not normally noticeable, but repeated forks are unnecessarily expensive because they repeatedly dirty the parent page tables during the page table copy operation. It's trivial to just avoid write protecting the page table entry if it was already not writable. This patch was inspired by https://bugzilla.kernel.org/show_bug.cgi?id=200447 which points to an ancient "waste time re-doing fork" issue in the presence of lots of signals. That bug was fixed by Eric Biederman's signal handling series culminating in commit c3ad2c3b ("signal: Don't restart fork when signals come in"), but the unnecessary work for repeated forks is still worth just fixing, particularly since the fix is trivial. Cc: Eric Biederman <ebiederm@xmission.com> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
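The change reduces to one extra condition in the PTE copy path, roughly as sketched below; the surrounding copy loop is omitted, and is_cow_mapping(), pte_write(), ptep_set_wrprotect(), and pte_wrprotect() are the standard mm helpers:

```c
/*
 * Only write-protect (and thereby dirty the parent's page table) when
 * the entry is actually writable in a COW mapping; a PTE that is
 * already read-only can simply be copied as-is.
 */
if (is_cow_mapping(vm_flags) && pte_write(pte)) {
	ptep_set_wrprotect(src_mm, addr, src_pte);
	pte = pte_wrprotect(pte);
}
```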
-
Colin Ian King authored
At the point where r is being checked for different values, r is always going to be equal to 2, as the previous if statements jump to end or end1 if r is not 2. Hence the assignment to err can be simplified to a plain assignment, without any checks on the value of r. Detected by CoverityScan, CID#1226737 ("Logically dead code") Signed-off-by: Colin Ian King <colin.king@canonical.com> Reviewed-by: Mikulas Patocka <mikulas@artax.karlin.mff.cuni.cz> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Jens Axboe authored
Tejun Heo wrote: > > I asked Jens whether he could take care of the libata tree and he > thankfully agreed, so, from now on, Jens will be the libata > maintainer. > > Thanks a lot! Thanks for your work in this area. I still remember the first linux storage summit we did in Vancouver 2001, Tejun was invited to talk about his libata error handling work. Before that, it was basically a crap shoot if we recovered properly or not... A lot of water has flown under the bridge since then! Here's an "official" patch. Linus, can you apply it? Signed-off-by: Jens Axboe <axboe@kernel.dk> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 24 Aug, 2018 5 commits
-
-
Linus Torvalds authored (merge of git://git.kernel.org/pub/scm/linux/kernel/git/tj/libata)
Pull libata updates from Tejun Heo: "Nothing too interesting. Mostly ahci and ahci_platform changes, many around power management" * 'for-4.19' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/libata: (22 commits) ata: ahci_platform: enable to get and control reset ata: libahci_platform: add reset control support ata: add an extra argument to ahci_platform_get_resources() ata: sata_rcar: Add r8a77965 support ata: sata_rcar: exclude setting of PHY registers in Gen3 ata: sata_rcar: really mask all interrupts on Gen2 and later Revert "ata: ahci_platform: allow disabling of hotplug to save power" ata: libahci: Allow reconfigure of DEVSLP register ata: libahci: Correct setting of DEVSLP register ata: ahci: Enable DEVSLP by default on x86 with SLP_S0 ata: ahci: Support state with min power but Partial low power state Revert "ata: ahci_platform: convert kcalloc to devm_kcalloc" ata: sata_rcar: Add rudimentary Runtime PM support ata: sata_rcar: Provide a short-hand for &pdev->dev ata: Only output sg element mapped number in verbose debug ata: Guard ata_scsi_dump_cdb() by ATA_VERBOSE_DEBUG ata: ahci_platform: convert kcalloc to devm_kcalloc ata: ahci_platform: convert kzallloc to kcalloc ata: ahci_platform: correct parameter documentation for ahci_platform_shutdown libata: remove ata_sff_data_xfer_noirq() ...
-
Linus Torvalds authored (merge of git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup)
Pull cgroup updates from Tejun Heo: "Just one commit from Steven to take out spin lock from trace event handlers" * 'for-4.19' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup: cgroup/tracing: Move taking of spin lock out of trace event handlers
-
Linus Torvalds authored (merge of git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq)
Pull workqueue updates from Tejun Heo: "Over the lockdep cross-release churn, workqueue lost some of the existing annotations. Johannes Berg restored it and also improved them" * 'for-4.19' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq: workqueue: re-add lockdep dependencies for flushing workqueue: skip lockdep wq dependency in cancel_work_sync()
-
Linus Torvalds authored (merge of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu)
Pull IOMMU updates from Joerg Roedel: - PASID table handling updates for the Intel VT-d driver. It implements a global PASID space now, so that applications using multiple devices will just have one PASID. - A new config option to make iommu passthrough mode the default. - New sysfs attribute for iommu groups to export the type of the default domain. - A debugfs interface (for debug only) usable by IOMMU drivers to export internals to user-space. - R-Car Gen3 SoCs support for the ipmmu-vmsa driver - The ARM-SMMU now aborts transactions from unknown devices and devices not attached to any domain. - Various cleanups and smaller fixes all over the place. * tag 'iommu-updates-v4.19' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu: (42 commits) iommu/omap: Fix cache flushes on L2 table entries iommu: Remove the ->map_sg indirection iommu/arm-smmu-v3: Abort all transactions if SMMU is enabled in kdump kernel iommu/arm-smmu-v3: Prevent any devices access to memory without registration iommu/ipmmu-vmsa: Don't register as BUS IOMMU if machine doesn't have IPMMU-VMSA iommu/ipmmu-vmsa: Clarify supported platforms iommu/ipmmu-vmsa: Fix allocation in atomic context iommu: Add config option to set passthrough as default iommu: Add sysfs attribyte for domain type iommu/arm-smmu-v3: sync the OVACKFLG to PRIQ consumer register iommu/arm-smmu: Error out only if not enough context interrupts iommu/io-pgtable-arm-v7s: Abort allocation when table address overflows the PTE iommu/io-pgtable-arm: Fix pgtable allocation in selftest iommu/vt-d: Remove the obsolete per iommu pasid tables iommu/vt-d: Apply per pci device pasid table in SVA iommu/vt-d: Allocate and free pasid table iommu/vt-d: Per PCI device pasid table interfaces iommu/vt-d: Add for_each_device_domain() helper iommu/vt-d: Move device_domain_info to header iommu/vt-d: Apply global PASID in SVA ...
-
Linus Torvalds authored (merge of git://git.kernel.org/pub/scm/linux/kernel/git/rzhang/linux)
Pull thermal management updates from Zhang Rui: - Add Daniel Lezcano as the reviewer of thermal framework and SoC driver changes (Daniel Lezcano). - Fix a bug in the intel_dts_soc_thermal driver, which does not translate the IO-APIC GSI (Global System Interrupt) into a Linux irq number (Hans de Goede). - For device tree bindings, allow cooling devices sharing the same trip point with the same contribution value to share a cooling map (Viresh Kumar). * 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/rzhang/linux: dt-bindings: thermal: Allow multiple devices to share cooling map MAINTAINERS: Add Daniel Lezcano as designated reviewer for thermal Thermal: Intel SoC DTS: Translate IO-APIC GSI number to linux irq number
-