- 03 Mar, 2020 3 commits
-
-
Hou Tao authored
We neither assign congested_fn for request-based blk-mq devices nor implement it correctly. So fix both. Also, remove incorrect comment from dm_init_normal_md_queue and rename it to dm_init_congested_fn. Fixes: 4aa9c692 ("bdi: separate out congested state into a separate struct") Cc: stable@vger.kernel.org Signed-off-by: Hou Tao <houtao1@huawei.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Mike Snitzer authored
In cases where dec_in_flight() has to requeue the integrity_bio_wait work to transfer the rest of the data, the bio's __bi_remaining might already have been decremented to 0, e.g.: if bio passed to underlying data device was split via blk_queue_split(). Use dm_bio_{record,restore} rather than effectively open-coding them in dm-integrity -- these methods now manage __bi_remaining too. Depends-on: f7f0b057a9c1 ("dm bio record: save/restore bi_end_io and bi_integrity") Reported-by: Daniel Glöckner <dg@emlix.com> Suggested-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Mike Snitzer authored
Also, save/restore __bi_remaining in case the bio was used in a BIO_CHAIN (e.g. due to blk_queue_split). Suggested-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
- 27 Feb, 2020 5 commits
-
-
Shin'ichiro Kawasaki authored
Dm-zoned initializes reference counters of new chunk works with zero value and refcount_inc() is called to increment the counter. However, the refcount_inc() function handles the addition to zero value as an error and triggers the warning as follows:

refcount_t: addition on 0; use-after-free.
WARNING: CPU: 7 PID: 1506 at lib/refcount.c:25 refcount_warn_saturate+0x68/0xf0
...
CPU: 7 PID: 1506 Comm: systemd-udevd Not tainted 5.4.0+ #134
...
Call Trace:
 dmz_map+0x2d2/0x350 [dm_zoned]
 __map_bio+0x42/0x1a0
 __split_and_process_non_flush+0x14a/0x1b0
 __split_and_process_bio+0x83/0x240
 ? kmem_cache_alloc+0x165/0x220
 dm_process_bio+0x90/0x230
 ? generic_make_request_checks+0x2e7/0x680
 dm_make_request+0x3e/0xb0
 generic_make_request+0xcf/0x320
 ? memcg_drain_all_list_lrus+0x1c0/0x1c0
 submit_bio+0x3c/0x160
 ? guard_bio_eod+0x2c/0x130
 mpage_readpages+0x182/0x1d0
 ? bdev_evict_inode+0xf0/0xf0
 read_pages+0x6b/0x1b0
 __do_page_cache_readahead+0x1ba/0x1d0
 force_page_cache_readahead+0x93/0x100
 generic_file_read_iter+0x83a/0xe40
 ? __seccomp_filter+0x7b/0x670
 new_sync_read+0x12a/0x1c0
 vfs_read+0x9d/0x150
 ksys_read+0x5f/0xe0
 do_syscall_64+0x5b/0x180
 entry_SYSCALL_64_after_hwframe+0x44/0xa9
...

After this warning, following refcount API calls for the counter all fail to change the counter value. Fix this by setting the initial reference counter value not zero but one for the new chunk works. Instead, do not call refcount_inc() via dmz_get_chunk_work() for the new chunk works.

The failure was observed with linux version 5.4 with CONFIG_REFCOUNT_FULL enabled. Refcount rework was merged to linux version 5.5 by the commit 168829ad ("Merge branch 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip"). After this commit, CONFIG_REFCOUNT_FULL was removed and the failure was observed regardless of kernel configuration. Linux version 4.20 merged the commit 092b5648 ("dm zoned: target: use refcount_t for dm zoned reference counters"). Before this commit, dm zoned used atomic_t APIs which do not check addition to zero, so this fix is not necessary for earlier kernels.

Fixes: 092b5648 ("dm zoned: target: use refcount_t for dm zoned reference counters")
Cc: stable@vger.kernel.org # 5.4+
Signed-off-by: Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
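A minimal sketch of the refcount pattern being fixed (the struct and function names here are illustrative, not the actual dm-zoned code):

  #include <linux/refcount.h>

  struct dmz_chunk_work_sketch {          /* name invented for illustration */
      refcount_t refcount;
      /* ... */
  };

  /* Broken pattern: the counter starts at 0 and the creating path then
   * calls refcount_inc(), which refcount_t treats as "addition on 0;
   * use-after-free" and saturates the counter. */
  static void chunk_work_init_broken(struct dmz_chunk_work_sketch *cw)
  {
      refcount_set(&cw->refcount, 0);
      refcount_inc(&cw->refcount);      /* triggers refcount_warn_saturate() */
  }

  /* Fixed pattern: initialize to 1 so the creator's reference is already
   * accounted for; only additional users call refcount_inc(). */
  static void chunk_work_init_fixed(struct dmz_chunk_work_sketch *cw)
  {
      refcount_set(&cw->refcount, 1);
  }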
-
Mikulas Patocka authored
Verify the watermark upon resume - so that if the target is reloaded with lower watermark, it will start the cleanup process immediately. Fixes: 48debafe ("dm: add writecache target") Cc: stable@vger.kernel.org # 4.18+ Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Mikulas Patocka authored
The function dm_suspended returns true if the target is suspended. However, when the target is being suspended during unload, it returns false.

An example where this is a problem: the test "!dm_suspended(wc->ti)" in writecache_writeback is not sufficient, because dm_suspended returns zero while writecache_suspend is in progress. As is, without an enhanced dm_suspended, simply switching from flush_workqueue to drain_workqueue still emits warnings:

workqueue writecache-writeback: drain_workqueue() isn't complete after 10 tries
workqueue writecache-writeback: drain_workqueue() isn't complete after 100 tries
workqueue writecache-writeback: drain_workqueue() isn't complete after 200 tries
workqueue writecache-writeback: drain_workqueue() isn't complete after 300 tries
workqueue writecache-writeback: drain_workqueue() isn't complete after 400 tries

writecache_suspend calls flush_workqueue(wc->writeback_wq) - this function flushes the current work. However, the workqueue may re-queue itself and flush_workqueue doesn't wait for re-queued works to finish. Because of this, the function writecache_writeback continues execution after the device was suspended and then runs concurrently with writecache_dtr, causing a crash in writecache_writeback. We must use drain_workqueue - that waits until the work and all re-queued works finish.

As a prereq for switching to drain_workqueue, this commit fixes dm_suspended to return true after the presuspend hook and before the postsuspend hook - just like during a normal suspend. It allows simplifying the dm-integrity and dm-writecache targets so that they don't have to maintain suspended flags on their own. With this change, drain_workqueue() can be used effectively.

This change was tested with the lvm2 testsuite and cryptsetup testsuite and there are no regressions.

Fixes: 48debafe ("dm: add writecache target")
Cc: stable@vger.kernel.org # 4.18+
Reported-by: Corey Marthaler <cmarthal@redhat.com>
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
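A small illustration of why flush_workqueue() is not enough for a self-requeueing work item (generic workqueue API; the names below are made up for the example):

  #include <linux/workqueue.h>

  static struct workqueue_struct *wb_wq;
  static struct work_struct wb_work;
  static bool more_to_write;              /* example condition */

  static void writeback_fn(struct work_struct *w)
  {
      /* ... write out some data ... */
      if (more_to_write)
          queue_work(wb_wq, w);           /* the work re-queues itself */
  }

  static void suspend_example(void)
  {
      /* flush_workqueue() waits only for instances queued at the time of
       * the call; a re-queued instance may still run after it returns. */
      flush_workqueue(wb_wq);

      /* drain_workqueue() keeps flushing until nothing re-queues itself,
       * so no writeback_fn() instance can run once it returns. */
      drain_workqueue(wb_wq);
  }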
-
Theodore Ts'o authored
[ 3934.173244] ======================================================
[ 3934.179572] WARNING: possible circular locking dependency detected
[ 3934.185884] 5.4.21-xfstests #1 Not tainted
[ 3934.190151] ------------------------------------------------------
[ 3934.196673] dmsetup/8897 is trying to acquire lock:
[ 3934.201688] ffffffffbce82b18 (shrinker_rwsem){++++}, at: unregister_shrinker+0x22/0x80
[ 3934.210268] but task is already holding lock:
[ 3934.216489] ffff92a10cc5e1d0 (&pmd->root_lock){++++}, at: dm_pool_metadata_close+0xba/0x120
[ 3934.225083] which lock already depends on the new lock.
[ 3934.564165] Chain exists of:
  shrinker_rwsem --> &journal->j_checkpoint_mutex --> &pmd->root_lock

For a more detailed lockdep report, please see:
https://lore.kernel.org/r/20200220234519.GA620489@mit.edu

We shouldn't need to hold the lock while we are just tearing down and freeing the whole metadata pool structure.

Fixes: 44d8ebf4 ("dm thin metadata: use pool locking at end of dm_pool_metadata_close")
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Mikulas Patocka authored
The crash can be reproduced by running the lvm2 testsuite test lvconvert-thin-external-cache.sh for several minutes, e.g.:

while :; do make check T=shell/lvconvert-thin-external-cache.sh; done

The crash happens in this call chain:
do_waker -> policy_tick -> smq_tick -> end_hotspot_period -> clear_bitset -> memset -> __memset -- which accesses an invalid pointer in the vmalloc area.

The work entry on the workqueue is executed even after the bitmap was freed. The problem is that cancel_delayed_work doesn't wait for the running work item to finish, so the work item can continue running and re-submitting itself even after cache_postsuspend. In order to make sure that the work item won't be running, we must use cancel_delayed_work_sync. Also, change flush_workqueue to drain_workqueue, so that if some work item submits itself or another work item, we are properly waiting for both of them.

Fixes: c6b4fcba ("dm: add cache target")
Cc: stable@vger.kernel.org # v3.9
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
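The difference between the two cancellation calls, sketched with a stand-alone delayed work (the names are illustrative, not dm-cache's):

  #include <linux/workqueue.h>

  static struct delayed_work waker;

  static void waker_fn(struct work_struct *w)
  {
      /* ... periodic policy_tick()-style work ... */
      schedule_delayed_work(&waker, HZ);  /* re-arms itself */
  }

  static void postsuspend_example(void)
  {
      /* cancel_delayed_work() only removes a pending, not-yet-running
       * instance; an already running waker_fn() keeps going and may
       * re-arm itself after its data structures are freed. */
      cancel_delayed_work(&waker);

      /* cancel_delayed_work_sync() additionally waits for a running
       * instance to finish and prevents it from re-arming itself. */
      cancel_delayed_work_sync(&waker);
  }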
-
- 25 Feb, 2020 3 commits
-
-
Mikulas Patocka authored
If the flag SB_FLAG_RECALCULATE is present in the superblock, but it was not specified on the command line (i.e. ic->recalculate_flag is false), dm-integrity would return an invalid table line - the reported number of arguments would not match the real number. Fixes: 468dfca3 ("dm integrity: add a bitmap mode") Cc: stable@vger.kernel.org # v5.2+ Reported-by: Ondrej Kozina <okozina@redhat.com> Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Mikulas Patocka authored
If we need to perform synchronous I/O in dm_integrity_map_continue(), we must make sure that we are not in the map function - in order to avoid the deadlock due to bio queuing in generic_make_request. To avoid the deadlock, we offload the request to metadata_wq. However, metadata_wq also processes metadata updates for write requests. If there are too many requests that get offloaded to metadata_wq at the beginning of dm_integrity_map_continue, the workqueue metadata_wq becomes clogged and the system is incapable of processing any metadata updates. This causes a deadlock because all the requests that need to do metadata updates wait for metadata_wq to proceed and metadata_wq waits inside wait_and_add_new_range until some existing request releases its range lock (which doesn't happen because the range lock is released after metadata update). In order to fix the deadlock, we create a new workqueue offload_wq and offload requests to it - so that processing of offload_wq is independent from processing of metadata_wq. Fixes: 7eada909 ("dm: add integrity target") Cc: stable@vger.kernel.org # v4.12+ Reported-by: Heinz Mauelshagen <heinzm@redhat.com> Tested-by: Heinz Mauelshagen <heinzm@redhat.com> Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com> Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
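Roughly what the split looks like (queue and work names follow the commit text where mentioned; the flags and max_active value below are assumptions, not the exact dm-integrity values):

  /* Separate queues: a flood of offloaded map requests can no longer
   * starve the metadata updates those requests are waiting on. */
  ic->metadata_wq = alloc_workqueue("dm-integrity-metadata",
                                    WQ_MEM_RECLAIM, 16);
  ic->offload_wq = alloc_workqueue("dm-integrity-offload",
                                   WQ_MEM_RECLAIM, 16);

  /* In the map path, synchronous work is now offloaded to offload_wq
   * instead of metadata_wq: */
  INIT_WORK(&dio->work, integrity_bio_wait);
  queue_work(ic->offload_wq, &dio->work);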
-
Mikulas Patocka authored
If we resume a device in bitmap mode and the on-disk format is in journal mode, we must recalculate anything above ic->sb->recalc_sector. Otherwise, there would be non-recalculated blocks which would cause I/O errors. Fixes: 468dfca3 ("dm integrity: add a bitmap mode") Cc: stable@vger.kernel.org # v5.2+ Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
- 27 Jan, 2020 1 commit
-
-
Mike Snitzer authored
Move blk_queue_make_request() to dm.c:alloc_dev() so that q->make_request_fn is never NULL during the lifetime of a DM device (even one that is created without a DM table). Otherwise generic_make_request() will crash simply by doing:

dmsetup create -n test
mount /dev/dm-N /mnt

While at it, move ->congested_data initialization out of dm.c:alloc_dev() and into the bio-based specific init method.

Reported-by: Stefan Bader <stefan.bader@canonical.com>
BugLink: https://bugs.launchpad.net/bugs/1860231
Fixes: ff36ab34 ("dm: remove request-based logic from make_request_fn wrapper")
Depends-on: c12c9a3c ("dm: various cleanups to md->queue initialization code")
Cc: stable@vger.kernel.org
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
- 16 Jan, 2020 1 commit
-
-
Mikulas Patocka authored
When dm-writecache is used with SSD as a cache device, it would submit a separate bio for each written block. The I/Os would be merged by the disk scheduler, but this merging degrades performance. Improve dm-writecache performance by submitting larger bios - this is possible as long as there is consecutive free space on the cache device.

Benchmark (arm64 with 64k page size, using /dev/ram0 as a cache device):

fio --bs=512k --iodepth=32 --size=400M --direct=1 \
    --filename=/dev/mapper/cache --rw=randwrite --numjobs=1 --name=test

block   old     new
size    MiB/s   MiB/s
---------------------
512     181     700
1k      347     1256
2k      644     2020
4k      1183    2759
8k      1852    3333
16k     2469    3509
32k     2974    3670
64k     3404    3810

Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
- 15 Jan, 2020 9 commits
-
-
Anatol Pomazau authored
Add a configurable timeout mechanism to disable queue_if_no_path without assistance from userspace multipathd. This reimplements multipathd's no_path_retry mechanism in kernel space. This is motivated by the desire to prevent processes from hanging indefinitely waiting for IO in cases where multipathd might be unable to respond (after a failure or for whatever reason). Despite replicating userspace multipathd's policy configuration in kernel space, it is important to prevent IOs from hanging forever, waiting for userspace that may be incapable of behaving correctly. Use of the provided "queue_if_no_path_timeout_secs" dm-multipath module parameter is optional. This timeout mechanism is disabled by default (by being set to 0). Signed-off-by: Anatol Pomazau <anatol@google.com> Co-developed-by: Gabriel Krisman Bertazi <krisman@collabora.com> Signed-off-by: Gabriel Krisman Bertazi <krisman@collabora.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
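A hedged sketch of how such a module-parameter-driven timeout can be wired up (the field names and the helper called from the timer are assumptions for illustration, not the exact dm-mpath code):

  #include <linux/moduleparam.h>
  #include <linux/timer.h>

  static unsigned long queue_if_no_path_timeout_secs;   /* 0 = disabled (default) */
  module_param(queue_if_no_path_timeout_secs, ulong, 0644);
  MODULE_PARM_DESC(queue_if_no_path_timeout_secs,
                   "No available paths queue IO timeout in seconds");

  /* Arm the timer whenever queue_if_no_path becomes active: */
  static void enable_nopath_timeout(struct multipath *m)
  {
      unsigned long t = READ_ONCE(queue_if_no_path_timeout_secs);

      if (t)
          mod_timer(&m->nopath_timer, jiffies + t * HZ);
  }

  /* Timer handler: stop queueing so stuck I/O is failed rather than
   * hanging forever waiting for userspace multipathd. */
  static void nopath_timeout_fn(struct timer_list *t)
  {
      struct multipath *m = from_timer(m, t, nopath_timer);

      disable_queue_if_no_path(m);    /* assumed helper */
  }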
-
Mikulas Patocka authored
With commit fe64369163c5 ("dm thin: don't allow changing data device during thin-pool load") it is now possible to re-parent the data device's flush_bio from the pool_c to pool structure. Doing so offers improved lifetime guarantees for the flush_bio so that the call to dm_pool_register_pre_commit_callback can now be done safely from pool_ctr(). Depends-on: fe64369163c5 ("dm thin: don't allow changing data device during thin-pool load") Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Mikulas Patocka authored
The existing code allows changing the data device when the thin-pool target is reloaded. This capability is not required and only complicates device lifetime guarantees. This can cause crashes like the one reported here: https://bugzilla.redhat.com/show_bug.cgi?id=1788596 where the kernel tries to issue a flush bio located in a structure that was already freed. Take the first step to simplifying the thin-pool's data device lifetime by disallowing changing it. Like the thin-pool's metadata device, the data device is now set in pool_create() and it cannot be changed for a given thin-pool. Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Mike Snitzer authored
dm-thin uses struct pool to hold the state of the pool. There may be multiple pool_c's pointing to a given pool, each pool_c represents a loaded target. pool_c's may be created and destroyed arbitrarily and the pool contains a reference count of pool_c's pointing to it. Since commit 694cfe7f ("dm thin: Flush data device before committing metadata") a pointer to pool_c is passed to dm_pool_register_pre_commit_callback and this function stores it in pmd->pre_commit_context. If this pool_c is freed, but pool is not (because there is another pool_c referencing it), we end up in a situation where pmd->pre_commit_context structure points to freed pool_c. It causes a crash in metadata_pre_commit_callback. Fix this by moving the dm_pool_register_pre_commit_callback() from pool_ctr() to pool_preresume(). This way the in-core thin-pool metadata is only ever armed with callback data whose lifetime matches the active thin-pool target. It should be noted that this fix preserves the ability to load a thin-pool table that uses a different data block device (that contains the same data) -- though it is unclear if that capability is still useful and/or needed. Fixes: 694cfe7f ("dm thin: Flush data device before committing metadata") Cc: stable@vger.kernel.org Reported-by: Zdenek Kabelac <zkabelac@redhat.com> Reported-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Mike Snitzer authored
Ensure that the pool is locked during calls to __commit_transaction and __destroy_persistent_data_objects. Just being consistent with locking, but reality is dm_pool_metadata_close is called once pool is being destroyed so access to pool shouldn't be contended. Also, use pmd_write_lock_in_core rather than __pmd_write_lock in dm_pool_commit_metadata and rename __pmd_write_lock to pmd_write_lock_in_core -- there was no need for the alias. In addition, verify that the pool is locked in __commit_transaction(). Fixes: 873f258b ("dm thin metadata: do not write metadata if no changes occurred") Cc: stable@vger.kernel.org Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Mikulas Patocka authored
When committing state, the function writecache_flush does the following:

1. write metadata (writecache_commit_flushed)
2. flush disk cache (writecache_commit_flushed)
3. wait for data writes to complete (writecache_wait_for_ios)
4. increase superblock seq_count
5. write the superblock
6. flush disk cache

It may happen that at step 3, when we wait for some write to finish, the disk may report the write as finished, but the write only hit the disk cache and it is not yet stored in persistent storage. At step 5 we write the superblock - it may happen that the superblock is written before the write that we waited for in step 3. If the machine crashes, it may result in incorrect data being returned after reboot.

In order to fix the bug, we must swap steps 2 and 3 in the above sequence, so that we first wait for writes to complete and then flush the disk cache.

Fixes: 48debafe ("dm: add writecache target")
Cc: stable@vger.kernel.org # 4.18+
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
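The corrected ordering, sketched with the helper names from the list above (call signatures simplified; this is not the literal driver code):

  writecache_commit_flushed(wc);       /* 1. write metadata */
  writecache_wait_for_ios(wc, WRITE);  /* 2. wait for outstanding data writes */
  /* 3. only now flush the disk cache, so the flush covers those writes */
  writecache_commit_flushed(wc);
  wc->seq_count++;                     /* 4. increase superblock seq_count */
  /* 5. write the superblock; 6. flush the disk cache again */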
-
Milan Broz authored
If benbi IV is used in AEAD construction, for example:

cryptsetup luksFormat <device> --cipher twofish-xts-benbi --key-size 512 --integrity=hmac-sha256

the constructor uses the wrong skcipher function and crashes:

BUG: kernel NULL pointer dereference, address: 00000014
...
EIP: crypt_iv_benbi_ctr+0x15/0x70 [dm_crypt]
Call Trace:
 ? crypt_subkey_size+0x20/0x20 [dm_crypt]
 crypt_ctr+0x567/0xfc0 [dm_crypt]
 dm_table_add_target+0x15f/0x340 [dm_mod]

Fix this by properly using crypt_aead_blocksize() in this case.

Fixes: ef43aa38 ("dm crypt: add cryptographic data integrity protection (authenticated encryption)")
Cc: stable@vger.kernel.org # v4.12+
Link: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=941051
Reported-by: Jerad Simpson <jbsimpson@gmail.com>
Signed-off-by: Milan Broz <gmazyland@gmail.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Milan Broz authored
Add experimental support for BitLocker encryption with CBC mode and the additional Elephant diffuser. The mode was used in older Windows systems and it is provided mainly for compatibility reasons. The userspace support to activate these devices is being added to the cryptsetup utility.

Read-write activation of such a device is very simple, for example:

echo <password> | cryptsetup bitlkOpen bitlk_image.img test

The Elephant diffuser uses two rotations in opposite directions for the data (Diffuser A and B) and also an XOR operation with the Sector key over the sector data; the Sector key is derived from additional key data. The original public documentation is available here:

http://download.microsoft.com/download/0/2/3/0238acaf-d3bf-4a6d-b3d6-0a0be4bbb36e/bitlockercipher200608.pdf

The dm-crypt implementation is embedded in the "elephant" IV (similar to the tcw IV construction). Because we cannot modify the original bio data for write (before encryption), an additional internal flag to pre-process data is added.

Signed-off-by: Milan Broz <gmazyland@gmail.com>
Reviewed-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Joe Thornber authored
The space-maps track the reference counts for disk blocks allocated by both the thin-provisioning and cache targets. There are variants for tracking metadata blocks and data blocks. Transactionality is implemented by never touching blocks from the previous transaction, so we can rollback in the event of a crash. When allocating a new block we need to ensure the block is free (has reference count of 0) in both the current and previous transaction. Prior to this fix we were doing this by searching for a free block in the previous transaction, and relying on a 'begin' counter to track where the last allocation in the current transaction was. This 'begin' field was not being updated in all code paths (eg, increment of a data block reference count due to breaking sharing of a neighbour block in the same btree leaf). This fix keeps the 'begin' field, but now it's just a hint to speed up the search. Instead the current transaction is searched for a free block, and then the old transaction is double checked to ensure it's free. Much simpler. This fixes reports of sm_disk_new_block()'s BUG_ON() triggering when DM thin-provisioning's snapshots are heavily used. Reported-by: Eric Wheeler <dm-devel@lists.ewheeler.net> Cc: stable@vger.kernel.org Signed-off-by: Joe Thornber <ejt@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
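In pseudocode, the allocation strategy changes roughly like this (the helper names here are invented for illustration; they are not the persistent-data API):

  static int sm_new_block_sketch(struct space_map_sketch *sm, dm_block_t *b)
  {
      dm_block_t candidate;
      int r;

      /* Find a block that is free in the *current* transaction, using
       * 'begin' purely as a hint for where to start searching. */
      r = find_free_in_current(sm, sm->begin, &candidate);
      if (r)
          return r;

      /* Double-check it is also free in the *previous* transaction, so a
       * crash can still roll back safely.  (The real code keeps searching
       * rather than failing outright.) */
      if (!free_in_previous(sm, candidate))
          return -ENOSPC;

      sm->begin = candidate + 1;   /* hint only; correctness no longer depends on it */
      *b = candidate;
      return 0;
  }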
-
- 07 Jan, 2020 10 commits
-
-
xianrong.zhou authored
Try to skip prefetching hash blocks that won't be needed due to the "check_at_most_once" option being enabled and the corresponding data blocks already having been verified. Since prefetching operates on a range of data blocks, do this by just trimming the two ends of the range. This doesn't skip every unneeded hash block, since data blocks in the middle of the range could also be unneeded, and hash blocks are still prefetched in large clusters as controlled by dm_verity_prefetch_cluster. But it can still help a lot. In a test on Android Q launching 91 apps every 15s repeated 21 times, prefetching was only done for 447177/4776629 = 9.36% of data blocks. Tested-by: ruxian.feng <ruxian.feng@transsion.com> Co-developed-by: yuanjiong.gao <yuanjiong.gao@transsion.com> Signed-off-by: yuanjiong.gao <yuanjiong.gao@transsion.com> Signed-off-by: xianrong.zhou <xianrong.zhou@transsion.com> [EB: simplified the 'while' loops and improved the commit message] Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
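A simplified sketch of the range trimming (v->validated_blocks is the check_at_most_once bitmap mentioned by the feature; the helper itself is illustrative):

  /* Trim already-verified data blocks off both ends of the prefetch
   * range.  Blocks in the middle may also be unneeded, but trimming the
   * ends is cheap and catches most of the benefit. */
  static void trim_prefetch_range(struct dm_verity *v,
                                  sector_t *block, unsigned int *n_blocks)
  {
      while (*n_blocks && test_bit(*block, v->validated_blocks)) {
          (*block)++;                     /* trim the front */
          (*n_blocks)--;
      }
      while (*n_blocks &&
             test_bit(*block + *n_blocks - 1, v->validated_blocks))
          (*n_blocks)--;                  /* trim the back */
  }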
-
Mikulas Patocka authored
GFP_KERNEL is not supposed to be or'd with GFP_NOFS (the result is equivalent to GFP_KERNEL). Also, we use GFP_NOIO instead of GFP_NOFS because we don't want any I/O being submitted in the direct reclaim path. Fixes: 39d13a1a ("dm crypt: reuse eboiv skcipher for IV generation") Cc: stable@vger.kernel.org # v5.4+ Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
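The flag change, illustrated (the allocation call is simplified; the point is only the gfp_t argument):

  /* Wrong: GFP_KERNEL | GFP_NOFS is effectively just GFP_KERNEL, and
   * GFP_KERNEL allows direct reclaim to submit I/O. */
  req = skcipher_request_alloc(tfm, GFP_KERNEL | GFP_NOFS);

  /* Right: GFP_NOIO forbids I/O from the direct reclaim path, which is
   * what we need on the I/O submission path. */
  req = skcipher_request_alloc(tfm, GFP_NOIO);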
-
Jeffle Xu authored
Signed-off-by: Jeffle Xu <jefflexu@linux.alibaba.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
zhengbin authored
Fixes coccicheck warning:

drivers/md/dm-thin-metadata.c:814:3-14: WARNING: Assignment of 0/1 to bool variable
drivers/md/dm-thin-metadata.c:1109:1-12: WARNING: Assignment of 0/1 to bool variable
drivers/md/dm-thin-metadata.c:1621:1-12: WARNING: Assignment of 0/1 to bool variable
drivers/md/dm-thin-metadata.c:1652:1-12: WARNING: Assignment of 0/1 to bool variable
drivers/md/dm-thin-metadata.c:1706:1-12: WARNING: Assignment of 0/1 to bool variable

Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: zhengbin <zhengbin13@huawei.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
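The class of change behind these warnings (and the similar ones below), shown with a generic example rather than the actual dm-thin-metadata lines:

  bool dirty;

  dirty = 1;          /* flagged by coccicheck */
  dirty = true;       /* preferred spelling for a bool */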
-
zhengbin authored
Fixes coccicheck warning:

drivers/md/dm-snap.c:1064:3-18: WARNING: Assignment of 0/1 to bool variable
drivers/md/dm-snap.c:1152:1-16: WARNING: Assignment of 0/1 to bool variable
drivers/md/dm-snap.c:1317:1-16: WARNING: Assignment of 0/1 to bool variable

Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: zhengbin <zhengbin13@huawei.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
zhengbin authored
Fixes coccicheck warning: drivers/md/dm-bio-prison-v2.c:327:2-22: WARNING: Assignment of 0/1 to bool variable Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: zhengbin <zhengbin13@huawei.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
zhengbin authored
Fixes coccicheck warning: drivers/md/dm-mpath.c:1447:2-13: WARNING: Assignment of 0/1 to bool variable Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: zhengbin <zhengbin13@huawei.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Dmitry Fomichev authored
dm-zoned is observed to log failed kernel assertions and not work correctly when operating against a device with a zone size smaller than 128MiB (e.g. 32768 bits per 4K block). The reason is that the bitmap size per zone is calculated as zero with such a small zone size. Fix this problem and also make the code related to zone bitmap management be able to handle per zone bitmaps smaller than a single block. A dm-zoned-tools patch is required to properly format dm-zoned devices with zone sizes smaller than 128MiB. Fixes: 3b1a94c8 ("dm zoned: drive-managed zoned block device target") Cc: stable@vger.kernel.org Signed-off-by: Dmitry Fomichev <dmitry.fomichev@wdc.com> Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Heinz Mauelshagen authored
raid_status() wasn't emitting rebuild flags on the table line properly because the rdev number was not yet set properly; fix this by indexing the raid component devices array directly. Also fix the wrong argument count on the emitted table line, caused by one rebuild/write_mostly argument too many, and take any journal_(dev|mode) pairs into account. Link: https://bugzilla.redhat.com/1782045 Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Bryan Gurney authored
In the dust_map_write() function, change the return code variable "ret" to "r", to match the convention of the other device-mapper targets. Signed-off-by: Bryan Gurney <bgurney@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
- 05 Jan, 2020 7 commits
-
-
Linus Torvalds authored
-
Merge tag 'riscv/for-v5.5-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux
Linus Torvalds authored
Pull RISC-V fixes from Paul Walmsley:
 "Several fixes for RISC-V:

   - Fix function graph trace support

   - Prefix the CSR IRQ_* macro names with "RV_", to avoid collisions with macros elsewhere in the Linux kernel tree named "IRQ_TIMER"

   - Use __pa_symbol() when computing the physical address of a kernel symbol, rather than __pa()

   - Mark the RISC-V port as supporting GCOV

  One DT addition:

   - Describe the L2 cache controller in the FU540 DT file

  One documentation update:

   - Add patch acceptance guideline documentation"

* tag 'riscv/for-v5.5-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux:
  Documentation: riscv: add patch acceptance guidelines
  riscv: prefix IRQ_ macro names with an RV_ namespace
  clocksource: riscv: add notrace to riscv_sched_clock
  riscv: ftrace: correct the condition logic in function graph tracer
  riscv: dts: Add DT support for SiFive L2 cache controller
  riscv: gcov: enable gcov for RISC-V
  riscv: mm: use __pa_symbol for kernel symbols
-
Paul Walmsley authored
Formalize, in kernel documentation, the patch acceptance policy for arch/riscv. In summary, it states that as maintainers, we plan to only accept patches for new modules or extensions that have been frozen or ratified by the RISC-V Foundation. We've been following these guidelines for the past few months. In the meantime, we've received quite a bit of feedback that it would be helpful to have these guidelines formally documented. Based on a suggestion from Matthew Wilcox, we also add a link to this file to Documentation/process/index.rst, to make this document easier to find. The format of this document has also been changed to align to the format outlined in the maintainer entry profiles, in accordance with comments from Jon Corbet and Dan Williams. Signed-off-by: Paul Walmsley <paul.walmsley@sifive.com> Reviewed-by: Palmer Dabbelt <palmerdabbelt@google.com> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Albert Ou <aou@eecs.berkeley.edu> Cc: Krste Asanovic <krste@berkeley.edu> Cc: Andrew Waterman <waterman@eecs.berkeley.edu> Cc: Matthew Wilcox <willy@infradead.org> Cc: Dan Williams <dan.j.williams@intel.com> Cc: Jonathan Corbet <corbet@lwn.net>
-
Paul Walmsley authored
"IRQ_TIMER", used in the arch/riscv CSR header file, is a sufficiently generic macro name that it's used by several source files across the Linux code base. Some of these other files ultimately include the arch/riscv CSR include file, causing collisions. Fix by prefixing the RISC-V csr.h IRQ_ macro names with an RV_ prefix. Fixes: a4c3733d ("riscv: abstract out CSR names for supervisor vs machine mode") Reported-by: Olof Johansson <olof@lixom.net> Acked-by: Olof Johansson <olof@lixom.net> Signed-off-by: Paul Walmsley <paul.walmsley@sifive.com>
-
Zong Li authored
When enabling the ftrace graph tracer, it gets the tracing clock in ftrace_push_return_trace(). Eventually, it invokes riscv_sched_clock() to get the clock value. If riscv_sched_clock() isn't marked with 'notrace', it will call ftrace_push_return_trace() and cause an infinite loop. The result of the failure is as follows:

command: echo function_graph >current_tracer

[ 46.176787] Unable to handle kernel paging request at virtual address ffffffe04fb38c48
[ 46.177309] Oops [#1]
[ 46.177478] Modules linked in:
[ 46.177770] CPU: 0 PID: 256 Comm: $d Not tainted 5.5.0-rc1 #47
[ 46.177981] epc: ffffffe00035e59a ra : ffffffe00035e57e sp : ffffffe03a7569b0
[ 46.178216]  gp : ffffffe000d29b90 tp : ffffffe03a756180 t0 : ffffffe03a756968
[ 46.178430]  t1 : ffffffe00087f408 t2 : ffffffe03a7569a0 s0 : ffffffe03a7569f0
[ 46.178643]  s1 : ffffffe00087f408 a0 : 0000000ac054cda4 a1 : 000000000087f411
[ 46.178856]  a2 : 0000000ac054cda4 a3 : 0000000000373ca0 a4 : ffffffe04fb38c48
[ 46.179099]  a5 : 00000000153e22a8 a6 : 00000000005522ff a7 : 0000000000000005
[ 46.179338]  s2 : ffffffe03a756a90 s3 : ffffffe00032811c s4 : ffffffe03a756a58
[ 46.179570]  s5 : ffffffe000d29fe0 s6 : 0000000000000001 s7 : 0000000000000003
[ 46.179809]  s8 : 0000000000000003 s9 : 0000000000000002 s10: 0000000000000004
[ 46.180053]  s11: 0000000000000000 t3 : 0000003fc815749c t4 : 00000000000efc90
[ 46.180293]  t5 : ffffffe000d29658 t6 : 0000000000040000
[ 46.180482] status: 0000000000000100 badaddr: ffffffe04fb38c48 cause: 000000000000000f

Signed-off-by: Zong Li <zong.li@sifive.com>
Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
[paul.walmsley@sifive.com: cleaned up patch description]
Fixes: 92e0d143 ("clocksource/drivers/riscv_timer: Provide the sched_clock")
Cc: stable@vger.kernel.org
Signed-off-by: Paul Walmsley <paul.walmsley@sifive.com>
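The shape of the fix is simply the annotation on the clock read function; a sketch (body trimmed to the essentials):

  /* Without 'notrace', the function-graph tracer instruments this
   * function; since the tracer itself calls it (via the tracing clock)
   * from ftrace_push_return_trace(), the recursion never terminates. */
  static u64 notrace riscv_sched_clock(void)
  {
      return get_cycles64();
  }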
-
Linus Torvalds authored
Merge misc fixes from Andrew Morton:
 "17 fixes"

* emailed patches from Andrew Morton <akpm@linux-foundation.org>:
  hexagon: define ioremap_uc
  ocfs2: fix the crash due to call ocfs2_get_dlm_debug once less
  ocfs2: call journal flush to mark journal as empty after journal recovery when mount
  mm/hugetlb: defer freeing of huge pages if in non-task context
  mm/gup: fix memory leak in __gup_benchmark_ioctl
  mm/oom: fix pgtables units mismatch in Killed process message
  fs/posix_acl.c: fix kernel-doc warnings
  hexagon: work around compiler crash
  hexagon: parenthesize registers in asm predicates
  fs/namespace.c: make to_mnt_ns() static
  fs/nsfs.c: include headers for missing declarations
  fs/direct-io.c: include fs/internal.h for missing prototype
  mm: move_pages: return valid node id in status if the page is already on the target node
  memcg: account security cred as well to kmemcg
  kcov: fix struct layout for kcov_remote_arg
  mm/zsmalloc.c: fix the migrated zspage statistics.
  mm/memory_hotplug: shrink zones when offlining memory
-
Linus Torvalds authored
Merge tag 'apparmor-pr-2020-01-04' of git://git.kernel.org/pub/scm/linux/kernel/git/jj/linux-apparmor

Pull apparmor fixes from John Johansen:

 - performance regression: only get a label reference if the fast path check fails

 - fix aa_xattrs_match() may sleep while holding a RCU lock

 - fix bind mounts aborting with -ENOMEM

* tag 'apparmor-pr-2020-01-04' of git://git.kernel.org/pub/scm/linux/kernel/git/jj/linux-apparmor:
  apparmor: fix aa_xattrs_match() may sleep while holding a RCU lock
  apparmor: only get a label reference if the fast path check fails
  apparmor: fix bind mounts aborting with -ENOMEM
-
- 04 Jan, 2020 1 commit
-
-
John Johansen authored
aa_xattrs_match() is unfortunately calling vfs_getxattr_alloc() from a context protected by an rcu_read_lock. This can not be done as vfs_getxattr_alloc() may sleep regardless of the gfp_t value being passed to it. Fix this by breaking the rcu_read_lock on the policy search when the xattr match feature is requested and restarting the search if a policy change occurs. Fixes: 8e51f908 ("apparmor: Add support for attaching profiles via xattr, presence and value") Reported-by: Jia-Ju Bai <baijiaju1990@gmail.com> Reported-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: John Johansen <john.johansen@canonical.com>
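The general shape of such a fix - take a reference, drop the RCU read lock around the sleeping call, and restart the search if the policy changed meanwhile (a pseudocode sketch; apart from aa_xattrs_match(), aa_get_profile() and aa_put_profile(), the names are invented):

  retry:
      rcu_read_lock();
      gen = policy_generation(ns);            /* illustrative */
      profile = lookup_attach_candidate(ns, name);
      aa_get_profile(profile);                /* pin the profile ...        */
      rcu_read_unlock();                      /* ... and leave RCU to sleep */

      ret = aa_xattrs_match(bprm, profile, state);    /* may sleep */

      if (policy_generation(ns) != gen) {     /* a profile was replaced */
          aa_put_profile(profile);
          goto retry;                         /* restart the search */
      }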
-