- 25 Jan, 2015 1 commit
-
-
Shaohua Li authored
Basically move the SAS ATA tag allocation to libata-scsi.c to make it clear this stuff is just for SAS. Signed-off-by: Shaohua Li <shli@fb.com> Signed-off-by: Jens Axboe <axboe@fb.com>
-
- 23 Jan, 2015 4 commits
-
-
Shaohua Li authored
libata uses its own tag management, which is duplicated functionality, and the implementation is poor. And if we switch to blk-mq, tagging is built in. It's time to switch to generic tagging. The SAS driver has its own tag management, and it looks like we can't directly map the host controller tag to the SATA tag, so I just bypassed the SAS case. I changed the code/variable names for the tag management in libata to make it self-contained; only SAS will use it. Later, if libsas implements its own tag management, the tag management code in libata can be deleted easily. Cc: Jens Axboe <axboe@fb.com> Cc: Christoph Hellwig <hch@infradead.org> Signed-off-by: Shaohua Li <shli@fb.com> Acked-by: Tejun Heo <tj@kernel.org> Signed-off-by: Jens Axboe <axboe@fb.com>
-
Jens Axboe authored
We need the tagging changes for the libata conversion.
-
Shaohua Li authored
This is the blk-mq part to support tag allocation policy. The default allocation policy isn't changed (though it's not a strict FIFO). The new policy is round-robin, for libata. But it's a best-effort implementation: if multiple tasks are competing, the tags returned will be mixed (which is unavoidable even without blk-mq, as requests from different tasks can be mixed in the queue). Cc: Jens Axboe <axboe@fb.com> Cc: Tejun Heo <tj@kernel.org> Cc: Christoph Hellwig <hch@infradead.org> Signed-off-by: Shaohua Li <shli@fb.com> Signed-off-by: Jens Axboe <axboe@fb.com>
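As a rough sketch of what a round-robin tag policy amounts to (illustrative only, not the blk-mq implementation; the helper name and the single-word bitmap are simplifications, and the real code also deals with per-word bitmaps, per-CPU hints and wait queues):

    /* Round-robin tag allocation over a bitmap of 'depth' tags.
     * 'last_tag' is a per-queue cursor so successive allocations walk the
     * tag space in order instead of reusing the lowest free tag.
     * Not race-free as written; real code would use test_and_set_bit()
     * under the appropriate locking. */
    static int rr_get_tag(unsigned long *bitmap, unsigned int depth,
                          unsigned int *last_tag)
    {
        unsigned int tag;

        /* start searching just after the previously issued tag */
        tag = find_next_zero_bit(bitmap, depth, *last_tag);
        if (tag >= depth) {
            /* wrap around and search from the beginning */
            tag = find_next_zero_bit(bitmap, depth, 0);
            if (tag >= depth)
                return -1;      /* no free tag */
        }
        set_bit(tag, bitmap);
        *last_tag = (tag + 1) % depth;  /* advance the cursor */
        return tag;
    }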
-
Shaohua Li authored
The libata tag allocation uses a round-robin policy. The next patch will make libata use the block layer's generic tag allocation, so let's add a policy to tag allocation. Currently there are two policies: FIFO (the default) and round-robin. Cc: Jens Axboe <axboe@fb.com> Cc: Tejun Heo <tj@kernel.org> Cc: Christoph Hellwig <hch@infradead.org> Signed-off-by: Shaohua Li <shli@fb.com> Signed-off-by: Jens Axboe <axboe@fb.com>
-
- 22 Jan, 2015 2 commits
-
-
Boaz Harrosh authored
As Christoph put it: Can we just get rid of the warnings? It's fairly annoying, as devices without partitions are perfectly fine and very useful. I, too, have seen this message on every VM boot for ages, on all my devices. I would love to just remove it. For me a partition table is only needed for a booting BIOS, grub, and the like. CC: Christoph Hellwig <hch@infradead.org> Signed-off-by: Boaz Harrosh <boaz@plexistor.com> Signed-off-by: Jens Axboe <axboe@fb.com>
-
kaoudis authored
Converting to blk-queue got rid of the driver's RCU locking-on-queue, so remove the now-unnecessary RCU locking-on-queue artefacts. Reviewed-by: Keith Busch <keith.busch@intel.com> Signed-off-by: Kelly Nicole Kaoudis <kaoudis@colorado.edu> Signed-off-by: Jens Axboe <axboe@fb.com>
-
- 21 Jan, 2015 2 commits
-
-
Martin K. Petersen authored
blkdev_issue_zeroout() will zero a given block range. This is done by way of explicit writing, thus provisioning or allocating the blocks on disk. There are use cases where the desired behavior is to zero the blocks but unprovision them if possible. The blocks must deterministically contain zeroes when they are subsequently read back. This patch adds a flag to blkdev_issue_zeroout() that provides this variant. If the discard flag is set and the block device guarantees discard_zeroes_data, we will use REQ_DISCARD to clear the block range. If the device does not support discard_zeroes_data, or if the discard request fails, we will fall back to REQ_WRITE_SAME first and then a regular REQ_WRITE. Also update the callers of blkdev_issue_zeroout() to reflect the new flag, and make sb_issue_zeroout() prefer the discard approach. Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@fb.com>
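The fallback chain described above can be sketched roughly as follows; this is only an illustration of the order of preference, with simplified arguments, and zeroout_via_writes() is a hypothetical stand-in for the plain-write path, not a real kernel symbol:

    /* Order of preference when asked to zero with the discard flag set:
     * 1) REQ_DISCARD, only if the device guarantees that discarded blocks
     *    read back as zeroes; 2) REQ_WRITE_SAME with the zero page;
     * 3) regular zero-filled writes. */
    static int zeroout_sketch(struct block_device *bdev, sector_t sector,
                              sector_t nr_sects, gfp_t gfp, bool discard)
    {
        struct request_queue *q = bdev_get_queue(bdev);

        if (discard && q && blk_queue_discard(q) &&
            q->limits.discard_zeroes_data &&
            blkdev_issue_discard(bdev, sector, nr_sects, gfp, 0) == 0)
            return 0;

        if (bdev_write_same(bdev) &&
            blkdev_issue_write_same(bdev, sector, nr_sects, gfp,
                                    ZERO_PAGE(0)) == 0)
            return 0;

        return zeroout_via_writes(bdev, sector, nr_sects, gfp);
    }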
-
Jeff Moyer authored
Hi,

If you can manage to submit an async write as the first async I/O from the context of a process with realtime scheduling priority, then a cfq_queue is allocated, but filed into the wrong async_cfqq bucket. It ends up in the best effort array, but actually has realtime I/O scheduling priority set in cfqq->ioprio.

The reason is that cfq_get_queue assumes the default scheduling class and priority when there is no information present (i.e. when the async cfqq is created):

    static struct cfq_queue *
    cfq_get_queue(struct cfq_data *cfqd, bool is_sync, struct cfq_io_cq *cic,
                  struct bio *bio, gfp_t gfp_mask)
    {
            const int ioprio_class = IOPRIO_PRIO_CLASS(cic->ioprio);
            const int ioprio = IOPRIO_PRIO_DATA(cic->ioprio);

cic->ioprio starts out as 0, which is "invalid". So, a class of 0 (IOPRIO_CLASS_NONE) is passed to cfq_async_queue_prio like so:

            async_cfqq = cfq_async_queue_prio(cfqd, ioprio_class, ioprio);

    static struct cfq_queue **
    cfq_async_queue_prio(struct cfq_data *cfqd, int ioprio_class, int ioprio)
    {
            switch (ioprio_class) {
            case IOPRIO_CLASS_RT:
                    return &cfqd->async_cfqq[0][ioprio];
            case IOPRIO_CLASS_NONE:
                    ioprio = IOPRIO_NORM;
                    /* fall through */
            case IOPRIO_CLASS_BE:
                    return &cfqd->async_cfqq[1][ioprio];
            case IOPRIO_CLASS_IDLE:
                    return &cfqd->async_idle_cfqq;
            default:
                    BUG();
            }
    }

Here, instead of returning a class mapped from the process' scheduling priority, we get back the bucket associated with IOPRIO_CLASS_BE.

Now, there is no queue allocated there yet, so we create it:

            cfqq = cfq_find_alloc_queue(cfqd, is_sync, cic, bio, gfp_mask);

That function ends up doing this:

            cfq_init_cfqq(cfqd, cfqq, current->pid, is_sync);
            cfq_init_prio_data(cfqq, cic);

cfq_init_cfqq marks the priority as having changed. Then, cfq_init_prio_data does this:

            ioprio_class = IOPRIO_PRIO_CLASS(cic->ioprio);
            switch (ioprio_class) {
            default:
                    printk(KERN_ERR "cfq: bad prio %x\n", ioprio_class);
            case IOPRIO_CLASS_NONE:
                    /*
                     * no prio set, inherit CPU scheduling settings
                     */
                    cfqq->ioprio = task_nice_ioprio(tsk);
                    cfqq->ioprio_class = task_nice_ioclass(tsk);
                    break;

So we basically have two code paths that treat IOPRIO_CLASS_NONE differently, which results in an RT async cfqq filed into a best effort bucket.

Attached is a patch which fixes the problem. I'm not sure how to make it cleaner. Suggestions would be welcome.

Signed-off-by: Jeff Moyer <jmoyer@redhat.com>
Tested-by: Hidehiro Kawai <hidehiro.kawai.ez@hitachi.com>
Cc: stable@kernel.org
Signed-off-by: Jens Axboe <axboe@fb.com>
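The patch itself is not quoted above, but the shape of the fix follows from the analysis: make the bucket selection interpret IOPRIO_CLASS_NONE the same way cfq_init_prio_data() does, i.e. fall back to the task's CPU scheduling class before picking an async_cfqq slot. A hedged sketch (the helper name is hypothetical):

    static struct cfq_queue **
    cfq_async_queue_for_task(struct cfq_data *cfqd, struct cfq_io_cq *cic,
                             struct task_struct *tsk)
    {
        int ioprio_class = IOPRIO_PRIO_CLASS(cic->ioprio);
        int ioprio = IOPRIO_PRIO_DATA(cic->ioprio);

        if (ioprio_class == IOPRIO_CLASS_NONE) {
            /* no prio set: inherit from CPU scheduling, exactly as
             * cfq_init_prio_data() will later do, so the bucket matches
             * the priority the cfqq ends up reporting */
            ioprio = task_nice_ioprio(tsk);
            ioprio_class = task_nice_ioclass(tsk);
        }
        return cfq_async_queue_prio(cfqd, ioprio_class, ioprio);
    }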
-
- 16 Jan, 2015 1 commit
-
-
Jens Axboe authored
null_blk is partitionable, but it doesn't store any of the info. When it is loaded, you would normally see: [1226739.343608] nullb0: unknown partition table [1226739.343746] nullb1: unknown partition table which can confuse some people. Add the appropriate gendisk flag to suppress this info. Signed-off-by: Jens Axboe <axboe@fb.com>
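The gendisk flag in question is presumably GENHD_FL_SUPPRESS_PARTITION_INFO (the same flag the brd change below refers to); a minimal sketch of setting it during disk setup, with the surrounding driver code abbreviated:

    /* Sketch: silence "unknown partition table" for a virtual disk by
     * flagging the gendisk before add_disk(). Assumes the flag named
     * above is the one the commit adds. */
    static int setup_disk_sketch(void)
    {
        struct gendisk *disk = alloc_disk(1);

        if (!disk)
            return -ENOMEM;
        disk->flags |= GENHD_FL_SUPPRESS_PARTITION_INFO;
        /* ... major, first_minor, fops, queue, disk_name ... */
        add_disk(disk);
        return 0;
    }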
-
- 14 Jan, 2015 6 commits
-
-
Jens Axboe authored
The blk-mq tagging tries to maintain some locality between CPUs and the tags issued. The tags are split into groups of words, and the words may not be fully populated. When searching for a new free tag, blk-mq may look at partial words, hence it passes in an offset/size to find_next_zero_bit(). However, it does that wrong: the size must always be the full length of the number of tags in that word, otherwise we'll potentially miss some near the end. Another issue is when __bt_get() goes from one word set to the next. It bumps the index, but not the last_tag associated with the previous index. Bump that to be in the range of the new word. Finally, clean up __bt_get() and __bt_get_word() a bit and get rid of the goto in there, and the unnecessary 'wrap' variable. Signed-off-by: Jens Axboe <axboe@fb.com>
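For reference, find_next_zero_bit(addr, size, offset) searches the bit range [offset, size), so 'size' must be the total number of tags in the word; passing anything smaller silently hides the tags at the end. An illustrative before/after (variable names are made up, not the __bt_get_word() code):

    /* 'word' holds 'depth' tags; resume searching at 'last_tag'. */

    /* wrong: the second argument caps the search short of the word's end,
     * so free tags near the end of the word can never be found */
    tag = find_next_zero_bit(&word, depth - last_tag, last_tag);

    /* right: always pass the full word depth; 'last_tag' is only the
     * offset where the search starts */
    tag = find_next_zero_bit(&word, depth, last_tag);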
-
Boaz Harrosh authored
Because the direct_access() API returns a PFN, partitions had better start on a 4K boundary; otherwise offset zero of a partition will not be aligned and blk_direct_access() will fail the call. By setting blk_queue_physical_block_size(PAGE_SIZE) we can communicate this to fdisk and friends. The call to blk_queue_physical_block_size() is harmless and will not affect kernel behavior in any way; it is only for communication to user mode. Before this patch, running fdisk on a default-size brd of 4M, the first sector offered is 34 (bad); after this patch it is 40, i.e. aligned to 8 sectors. Also, when entering some random partition sizes, the next partition start sector offered is 8-sector aligned after this patch. (Please note that with fdisk the user can still enter bad values; only the offered default values will be correct.) Note that with a bdev size > 4M fdisk will try to align on a 1M boundary (the above first sector will be 2048) in any case. CC: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Boaz Harrosh <boaz@plexistor.com> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Jens Axboe <axboe@fb.com>
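In driver terms the change boils down to one extra queue-limit call during brd setup; a minimal sketch, assuming the queue field is reachable as shown (names approximate the brd driver):

    /* The logical block size stays at the default of 512; the physical
     * block size is only a hint that makes fdisk & friends align
     * partitions to PAGE_SIZE. */
    blk_queue_physical_block_size(brd->brd_queue, PAGE_SIZE);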
-
Boaz Harrosh authored
This patch fixes up brd's partition scheme, now enjoying the best of all worlds. The MAIN fix here is that currently, if one fdisks some partitions, a BAD bug will make all partitions point to the same start/end sectors, i.e. 0 - brd_size, and an mkfs of any partition would trash the partition table and the other partitions. Another fix is that "mount -U uuid" will not work if show_part was not specified, because of the GENHD_FL_SUPPRESS_PARTITION_INFO flag. We now always load without it and remove the show_part parameter. [We remove Dmitry's new module parameter part_show; partitions are now always shown.] So NOW the logic goes like this: * max_part - just says how many minors to reserve between ramX devices. Either way, there can be as many partitions as requested. If the minors between devices run out, dynamic 259-major ids will be allocated on the fly. The default is now max_part=1, which means all partition devt(s) will come from the dynamic (259) major range. (If persistent partition minors are needed, use max_part=X.) For example, with /dev/sdX max_part is hard-coded to 16. * Creation of new devices on the fly still/always works: mknod /path/devnod b 1 X; fdisk -l /path/devnod will create a new device if [X / max_part] was not already created before (just as before). Partitions on the dynamically created device will work as well; the same logic applies to minors as with the pre-created ones. TODO: dynamically grow the device size, so each device can have its own size. CC: Dmitry Monakhov <dmonakhov@openvz.org> Tested-by: Ross Zwisler <ross.zwisler@linux.intel.com> Signed-off-by: Boaz Harrosh <boaz@plexistor.com> Signed-off-by: Jens Axboe <axboe@fb.com>
-
Jens Axboe authored
-
Matthew Wilcox authored
In order to support accesses to larger chunks of memory, pass in a 'size' parameter (counted in bytes), and return the amount available at that address. Add a new helper function, bdev_direct_access(), to handle common functionality including partition handling, checking the length requested is positive, checking for the sector being page-aligned, and checking the length of the request does not pass the end of the partition. Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com> Reviewed-by: Jan Kara <jack@suse.cz> Reviewed-by: Boaz Harrosh <boaz@plexistor.com> Signed-off-by: Jens Axboe <axboe@fb.com>
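A hedged sketch of the checks bdev_direct_access() centralizes, per the description above; the field accesses and the direct_access() signature reflect the kernel of that era but are approximations, not the exact in-tree code:

    long bdev_direct_access_sketch(struct block_device *bdev, sector_t sector,
                                   void **addr, unsigned long *pfn, long size)
    {
        long avail;

        if (size < 0)
            return size;            /* requested length must be positive */
        if (!bdev->bd_disk->fops->direct_access)
            return -EOPNOTSUPP;     /* driver has no direct_access method */
        if (sector + DIV_ROUND_UP(size, 512) >
            part_nr_sects_read(bdev->bd_part))
            return -ERANGE;         /* runs past the end of the partition */

        sector += bdev->bd_part->start_sect;   /* partition to whole-device sector */
        if (sector % (PAGE_SIZE / 512))
            return -EINVAL;         /* start must be page-aligned */

        avail = bdev->bd_disk->fops->direct_access(bdev, sector, addr,
                                                   pfn, size);
        return min(avail, size);    /* never report more than was asked for */
    }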
-
Matthew Wilcox authored
The 'pfn' returned by axonram was completely bogus, and has been since 2008. Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com> Reviewed-by: Jan Kara <jack@suse.cz> Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Cc: stable@vger.kernel.org Signed-off-by: Jens Axboe <axboe@fb.com>
-
- 02 Jan, 2015 8 commits
-
-
Jens Axboe authored
Looks like we pull it in through other ways on x86, but we fail on sparc: In file included from drivers/block/cryptoloop.c:30:0: drivers/block/loop.h:63:24: error: field 'tag_set' has incomplete type struct blk_mq_tag_set tag_set; Add the include to loop.h, kill it from loop.c. Signed-off-by: Jens Axboe <axboe@fb.com>
-
Ming Lei authored
The block core handles REQ_FUA via its flush state machine, so don't handle it in loop explicitly. Signed-off-by: Ming Lei <ming.lei@canonical.com> Signed-off-by: Jens Axboe <axboe@fb.com>
-
Ming Lei authored
No behaviour change; just move the handling of REQ_DISCARD and REQ_FLUSH into these two functions. Signed-off-by: Ming Lei <ming.lei@canonical.com> Signed-off-by: Jens Axboe <axboe@fb.com>
-
Ming Lei authored
Switch to block request completely. Signed-off-by: Ming Lei <ming.lei@canonical.com> Signed-off-by: Jens Axboe <axboe@fb.com>
-
Ming Lei authored
The conversion is fairly straightforward: use a workqueue to dispatch the requests of the loop block device. One big change is that requests are submitted to the backend file/device concurrently via the workqueue, so throughput may improve considerably. Given that write requests over the same file often run exclusively, don't handle them concurrently, to avoid extra context-switch cost, possible lock contention and work-scheduling cost. Also, with blk-mq there is an opportunity to get loop I/O merged before submitting to the backend file/device.

In the following test:
- base: v3.19-rc2-2041231
- loop over a file in an ext4 file system on an SSD disk
- bs: 4k, libaio, io depth: 64, O_DIRECT, num of jobs: 1
- throughput: IOPS

    -------------------------------------------------------
    |           |   base   | base with loop-mq |  delta   |
    -------------------------------------------------------
    | randread  |   1740   |       25318       |  +1355%  |
    -------------------------------------------------------
    | read      |  42196   |       51771       |  +22.6%  |
    -------------------------------------------------------
    | randwrite |  35709   |       34624       |    -3%   |
    -------------------------------------------------------
    | write     |  39137   |       40326       |    +3%   |
    -------------------------------------------------------

So loop-mq can improve throughput for both read and randread; meanwhile, write and randwrite performance is basically not hurt. Another benefit is that the loop driver code gets much simpler after the blk-mq conversion, so the patch can be thought of as a cleanup too.

Signed-off-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
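A rough sketch of the dispatch model described above; the struct layout and function names are illustrative (loop_wq and do_req_filebacked_sketch() are placeholders), not the actual drivers/block/loop.c symbols:

    struct loop_cmd {
        struct work_struct work;
        struct request *rq;
    };

    /* queue_rq() only hands the request off to a workqueue ... */
    static int loop_queue_rq_sketch(struct blk_mq_hw_ctx *hctx,
                                    const struct blk_mq_queue_data *bd)
    {
        struct loop_cmd *cmd = blk_mq_rq_to_pdu(bd->rq);

        blk_mq_start_request(bd->rq);
        cmd->rq = bd->rq;
        queue_work(loop_wq, &cmd->work);
        return BLK_MQ_RQ_QUEUE_OK;
    }

    /* ... and the work item performs the file I/O and completes the
     * request, so reads proceed concurrently while writes to the same
     * file are kept from racing by the scheme the message describes. */
    static void loop_work_fn_sketch(struct work_struct *work)
    {
        struct loop_cmd *cmd = container_of(work, struct loop_cmd, work);

        if (do_req_filebacked_sketch(cmd->rq) < 0)
            cmd->rq->errors = -EIO;
        blk_mq_complete_request(cmd->rq);
    }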
-
Jens Axboe authored
Commit b4c6a028 exported the start and unfreeze, but we need the regular blk_mq_freeze_queue() for the loop conversion. Signed-off-by: Jens Axboe <axboe@fb.com>
-
Jens Axboe authored
Linux 3.19-rc2
-
Ming Lei authored
Check IS_ERR_OR_NULL(return value) instead of just return value. Signed-off-by: Ming Lei <ming.lei@canonical.com> Reduced to IS_ERR() by me, we never return NULL. Signed-off-by: Jens Axboe <axboe@fb.com>
-
- 31 Dec, 2014 1 commit
-
-
Jens Axboe authored
If the queue is dying, we can't expect new requests to complete and come in and wake up other tasks waiting for requests. So after we have marked it as dying, wake up everybody currently waiting for a request. Once they wake, they will retry their allocation and fail appropriately due to the state of the queue. Tested-by: Keith Busch <keith.busch@intel.com> Signed-off-by: Jens Axboe <axboe@fb.com>
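Conceptually the change amounts to something like the following once the queue has been marked dying (a sketch; the helper for waking a tag set's waiters is an assumed name, and the real code also deals with reserved tags):

    /* Kick every task sleeping on a tag so it re-evaluates the queue
     * state: with the queue dying, the retried allocation fails instead
     * of waiting for a completion that will never arrive. */
    static void wake_all_tag_waiters_sketch(struct request_queue *q)
    {
        struct blk_mq_hw_ctx *hctx;
        unsigned int i;

        queue_for_each_hw_ctx(q, hctx, i)
            blk_mq_tag_wakeup_all(hctx->tags);  /* assumed helper name */
    }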
-
- 29 Dec, 2014 1 commit
-
-
Linus Torvalds authored
-
- 28 Dec, 2014 4 commits
-
-
Linus Torvalds authored (git://git.kernel.org/pub/scm/virt/kvm/kvm)
Pull KVM fixes from Paolo Bonzini: "The important fixes are for two bugs introduced by the merge window. On top of this, add a couple of WARN_ONs and stop spamming dmesg on pretty much every boot of a virtual machine" * tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: kvm: warn on more invariant breakage kvm: fix sorting of memslots with base_gfn == 0 kvm: x86: drop severity of "generation wraparound" message kvm: x86: vmx: reorder some msr writing
-
Linus Torvalds authored (git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs)
Pull vfs fix from Al Viro: "An embarrassing bug in lustre patches from this cycle ;-/" * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs: [regression] braino in "lustre: use is_root_inode()"
-
Paolo Bonzini authored
Modifying a non-existent slot is not allowed. Also check that the first loop doesn't move a deleted slot beyond the used part of the mslots array. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Paolo Bonzini authored
Before commit 0e60b079 (kvm: change memslot sorting rule from size to GFN, 2014-12-01), the memslots' sorting key was npages, meaning that a valid memslot couldn't have its sorting key equal to zero. On the other hand, a valid memslot can have base_gfn == 0, and invalid memslots are identified by base_gfn == npages == 0. Because of this, commit 0e60b079 broke the invariant that invalid memslots are at the end of the mslots array. When a memslot with base_gfn == 0 was created, any invalid memslot before it were left in place. This can be fixed by changing the insertion to use a ">=" comparison instead of "<=", but some care is needed to avoid breaking the case of deleting a memslot; see the comment in update_memslots. Thanks to Tiejun Chen for posting an initial patch for this bug. Reported-by: Jamie Heilman <jamie@audible.transient.net> Reported-by: Andy Lutomirski <luto@amacapital.net> Tested-by: Jamie Heilman <jamie@audible.transient.net> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 27 Dec, 2014 4 commits
-
-
Linus Torvalds authored (git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound)
Pull sound fixes from Takashi Iwai: "Just a couple of fixes for the new Intel Skylake HD-audio support" * tag 'sound-3.19-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound: ALSA: hda_intel: apply the Seperate stream_tag for Skylake ALSA: hda_controller: Separate stream_tag for input and output streams.
-
Paolo Bonzini authored
Since most virtual machines raise this message once, it is a bit annoying. Make it KERN_DEBUG severity. Cc: stable@vger.kernel.org Fixes: 7a2e8aaf Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Tiejun Chen authored
Commit 34a1cd60, "x86: vmx: move some vmx setting from vmx_init() to hardware_setup()", tried to refactor some code specific to VMX hardware setup into hardware_setup(), but some MSR writes must depend on previously established settings such as enable_apicv, enable_ept and so on. Reported-by: Jamie Heilman <jamie@audible.transient.net> Tested-by: Jamie Heilman <jamie@audible.transient.net> Signed-off-by: Tiejun Chen <tiejun.chen@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Al Viro authored
In one of the places (ll_md_blocking_ast()) we had open-coded !is_root_inode(inode) and replaced it with is_root_inode(inode). See the last chunk of f76c23:

    -        inode != inode->i_sb->s_root->d_inode)
    +        is_root_inode(inode))

should've been

    +        !is_root_inode(inode))

obviously...

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
- 26 Dec, 2014 5 commits
-
-
Linus Torvalds authored (git://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux)
Pull parisc build fix from Helge Deller: "This unbreaks the kernel compilation on parisc with gcc-4.9" * 'parisc-3.19-1' of git://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux: parisc: fix out-of-register compiler error in ldcw inline assembler function
-
John David Anglin authored
The __ldcw macro has a problem when its argument needs to be reloaded from memory. The output memory operand and the input register operand both need to be reloaded using a register in class R1_REGS when generating 64-bit code. This fails because there's only a single register in the class. Instead, use a memory clobber. This also makes the __ldcw macro a compiler memory barrier. Signed-off-by: John David Anglin <dave.anglin@bell.net> Cc: <stable@vger.kernel.org> [3.13+] Signed-off-by: Helge Deller <deller@gmx.de>
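The general shape of such a change, in GCC inline-assembly terms, is shown below; this is an illustration of trading an output memory operand for a "memory" clobber, not the exact parisc source (the real macro spells the ldcw instruction via a configuration-dependent macro):

    /* After the fix (shape of): the lock word is addressed only through
     * the input register operand, so no extra R1_REGS register is needed
     * to reload a memory operand, and the "memory" clobber additionally
     * makes the construct a compiler memory barrier. */
    #define __ldcw_sketch(a) ({                                     \
        unsigned int __ret;                                         \
        __asm__ __volatile__("ldcw 0(%1),%0"                        \
            : "=r" (__ret) : "r" (a) : "memory");                   \
        __ret;                                                      \
    })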
-
Libin Yang authored
The total number of Skylake's input and output streams exceeds 15, which causes some streams not to work because of overflow of the SDxCTL.STRM field when using the legacy stream tag allocation method. This patch uses the new stream tag allocation method by adding the flag AZX_DCAPS_SEPARATE_STREAM_TAG for the Skylake platform. Signed-off-by: Libin Yang <libin.yang@intel.com> Reviewed-by: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Takashi Iwai <tiwai@suse.de>
-
Rafal Redzimski authored
Implemented separate stream_tag assignment for input and output streams. According to the HDA specification a stream tag must be unique throughout the input stream group; however, an output stream may use a stream tag that is already in use by an input stream. This change is necessary to support hardware that provides a total of more than 15 stream DMA engines, which with the legacy implementation causes an overflow of the SDxCTL.STRM field (and the whole SDxCTL register) and, as a result, use of the reserved value 0 in the SDxCTL.STRM field, which confuses the HDA controller. Signed-off-by: Rafal Redzimski <rafal.f.redzimski@intel.com> Signed-off-by: Jayachandran B <jayachandran.b@intel.com> Signed-off-by: Libin Yang <libin.yang@intel.com> Reviewed-by: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Takashi Iwai <tiwai@suse.de>
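The numbering consequence of the separate scheme can be illustrated as follows (a sketch, not the azx driver code): each direction counts its own streams starting from tag 1, so neither direction ever needs a tag above 15 even when the combined number of DMA engines exceeds 15:

    /* 'dev' is the 0-based index of a stream within its own direction
     * (capture or playback). The legacy scheme used one global index
     * across both directions, which can exceed the 4-bit SDxCTL.STRM
     * field; tag 0 is reserved. */
    static unsigned int stream_tag_for(unsigned int dev)
    {
        unsigned int tag = dev + 1;

        WARN_ON(tag > 15);      /* would overflow SDxCTL.STRM */
        return tag;
    }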
-
Linus Torvalds authored (git://people.freedesktop.org/~airlied/linux)
Pull drm fixes from Dave Airlie: "Xmas fixes pull: core: one atomic fix, revert the WARN_ON dumb buffers patch. agp: fixup Dave J. nouveau: fix 3.18 regression for old userspace tegra fixes: vblank and iommu fixes amdkfd: fix bugs shown by testing with userspace, init apertures once msm: hdmi fixes and cleanup i915: misc fixes There is also a link ordering fix that I've asked to be cc'ed to you, putting iommu before gpu, it fixes an issue with amdkfd when things are all in the kernel, but I didn't like sending it via my tree without discussion. I'll probably be a bit on/off for a few weeks with pulls now, due to holidays and LCA, so don't be surprised if stuff gets a bit backed up, and things end up a bit large due to lag" * 'drm-fixes' of git://people.freedesktop.org/~airlied/linux: (28 commits) Revert "drm/gem: Warn on illegal use of the dumb buffer interface v2" agp: Fix up email address & attributions in AGP MODULE_AUTHOR tags nouveau: bring back legacy mmap handler drm/msm/hdmi: rework HDMI IRQ handler drm/msm/hdmi: enable regulators before clocks to avoid warnings drm/msm/mdp5: update irqs on crtc<->encoder link change drm/msm: block incoming update on pending updates drm/atomic: fix potential null ptr on plane enable drm/msm: Deletion of unnecessary checks before the function call "release_firmware" drm/msm: Deletion of unnecessary checks before two function calls drm/tegra: dc: Select root window for event dispatch drm/tegra: gem: Use the proper size for GEM objects drm/tegra: gem: Flush buffer objects upon allocation drm/tegra: dc: Fix a potential race on page-flip completion drm/tegra: dc: Consistently use the same pipe drm/irq: Add drm_crtc_vblank_count() drm/irq: Add drm_crtc_handle_vblank() drm/irq: Add drm_crtc_send_vblank_event() drm/i915: Disable PSMI sleep messages on all rings around context switches drm/i915: Force the CS stall for invalidate flushes ...
-
- 25 Dec, 2014 1 commit
-
-
Linus Torvalds authored (git://git.code.sf.net/p/openipmi/linux-ipmi)
Pull ipmi driver bugfixes from Corey Minyard: "Fix two bugs: One that lockdep turned up, I didn't go far enough with cleanup of attributes for IPMI. This has been there a long time; my previous fix of this didn't fix all the attributes. One fix for some arches that need an explicit linux/ctype.h for isspace()" * tag 'for-linus-2' of git://git.code.sf.net/p/openipmi/linux-ipmi: ipmi: Fix compile issue with isspace() ipmi: Finish cleanup of BMC attributes
-