1. 26 Jul, 2024 10 commits
    • selftests/mm: skip test for non-LPA2 and non-LVA systems · f556acc2
      Dev Jain authored
      After my improvement of the test in e4a4ba41 ("selftests/mm:
      va_high_addr_switch: dynamically initialize testcases to enable LPA2
      testing"), the test began to fail on 4K and 16K pages on non-LPA2
      systems.  To reduce noise in the CI systems, skip the test when the
      higher address space is not implemented.
      
      Link: https://lkml.kernel.org/r/20240718052504.356517-1-dev.jain@arm.com
      Fixes: e4a4ba41 ("selftests/mm: va_high_addr_switch: dynamically initialize testcases to enable LPA2 testing")
      Signed-off-by: Dev Jain <dev.jain@arm.com>
      Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
      Cc: Anshuman Khandual <anshuman.khandual@arm.com>
      Cc: Mark Brown <broonie@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      f556acc2
    • mm/page_alloc: fix pcp->count race between drain_pages_zone() vs __rmqueue_pcplist() · 66eca102
      Li Zhijian authored
      It's expected that no page should be left in pcp_list after calling
      zone_pcp_disable() in offline_pages().  However, it was observed that
      offline_pages() gets stuck [1] due to some pages remaining in pcp_list.

      Cause: there is a race condition between drain_pages_zone() and
      __rmqueue_pcplist() involving the pcp->count variable.  See the
      scenario below:
      
               CPU0                              CPU1
          ----------------                    ---------------
                                            spin_lock(&pcp->lock);
                                            __rmqueue_pcplist() {
      zone_pcp_disable() {
                                              /* list is empty */
                                              if (list_empty(list)) {
                                                /* add pages to pcp_list */
                                                alloced = rmqueue_bulk()
        mutex_lock(&pcp_batch_high_lock)
        ...
        __drain_all_pages() {
          drain_pages_zone() {
            /* read pcp->count, it's 0 here */
            count = READ_ONCE(pcp->count)
            /* 0 means nothing to drain */
                                                /* update pcp->count */
                                                pcp->count += alloced << order;
            ...
                                            ...
                                            spin_unlock(&pcp->lock);
      
      In this case, some pages remain in pcp_list even after
      zone_pcp_disable() has been called.  Since these pages are neither
      movable nor isolated, offline_pages() gets stuck as a result.

      Solution: expand the scope of pcp->lock to also protect pcp->count in
      drain_pages_zone(), ensuring no pages are left in the pcp list after
      zone_pcp_disable().
      
      [1] https://lore.kernel.org/linux-mm/6a07125f-e720-404c-b2f9-e55f3f166e85@fujitsu.com/
      
      Link: https://lkml.kernel.org/r/20240723064428.1179519-1-lizhijian@fujitsu.com
      Fixes: 4b23a68f ("mm/page_alloc: protect PCP lists with a spinlock")
      Signed-off-by: Li Zhijian <lizhijian@fujitsu.com>
      Reported-by: Yao Xingtao <yaoxt.fnst@fujitsu.com>
      Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: David Hildenbrand <david@redhat.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      66eca102
    • mm: memcg: add cacheline padding after lruvec in mem_cgroup_per_node · f59adcf5
      Roman Gushchin authored
      Oliver Sang reported a performance regression caused by commit
      98c9daf5 ("mm: memcg: guard memcg1-specific members of struct
      mem_cgroup_per_node"), which puts some fields of the mem_cgroup_per_node
      structure under the CONFIG_MEMCG_V1 config option.  Apparently it causes
      false cache-line sharing between the lruvec and lru_zone_size members of
      the structure.  Fix it by adding explicit padding after the lruvec
      member.
      
      Even though the padding is not required with CONFIG_MEMCG_V1 set, it seems
      like the introduced memory overhead is not significant enough to warrant
      another divergence in the mem_cgroup_per_node layout, so the padding is
      added unconditionally.
      
      Link: https://lkml.kernel.org/r/20240723171244.747521-1-roman.gushchin@linux.dev
      Fixes: 98c9daf5 ("mm: memcg: guard memcg1-specific members of struct mem_cgroup_per_node")
      Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev>
      Reported-by: kernel test robot <oliver.sang@intel.com>
      Closes: https://lore.kernel.org/oe-lkp/202407121335.31a10cb6-oliver.sang@intel.com
      Tested-by: Oliver Sang <oliver.sang@intel.com>
      Acked-by: Shakeel Butt <shakeel.butt@linux.dev>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Muchun Song <muchun.song@linux.dev>
      Cc: Roman Gushchin <roman.gushchin@linux.dev>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      f59adcf5
    • alloc_tag: outline and export free_reserved_page() · b3bebe44
      Suren Baghdasaryan authored
      Outline and export free_reserved_page() because modules use it, and it
      in turn uses page_ext_{get|put}(), which should not be exported.  The
      same result could be obtained by outlining {get|put}_page_tag_ref()
      instead, but that would have a higher performance impact, as those
      functions are used on more performance-critical paths.
      
      Link: https://lkml.kernel.org/r/20240717212844.2749975-1-surenb@google.com
      Fixes: dcfe378c ("lib: introduce support for page allocation tagging")
      Signed-off-by: Suren Baghdasaryan <surenb@google.com>
      Reported-by: kernel test robot <lkp@intel.com>
      Closes: https://lore.kernel.org/oe-kbuild-all/202407080044.DWMC9N9I-lkp@intel.com/
      Suggested-by: Christoph Hellwig <hch@infradead.org>
      Suggested-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Kent Overstreet <kent.overstreet@linux.dev>
      Cc: Pasha Tatashin <pasha.tatashin@soleen.com>
      Cc: Sourav Panda <souravpanda@google.com>
      Cc: <stable@vger.kernel.org>	[6.10]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      b3bebe44
    • decompress_bunzip2: fix rare decompression failure · bf6acd5d
      Ross Lagerwall authored
      The decompression code parses a huffman tree and counts the number of
      symbols for a given bit length.  In rare cases, there may be >= 256
      symbols with a given bit length, causing the unsigned char counter to
      overflow.  This later causes a decompression failure when the code
      tries and fails to find the bit length for a given symbol.

      Since the maximum number of symbols is 258, use an unsigned short
      instead.
      
      Link: https://lkml.kernel.org/r/20240717162016.1514077-1-ross.lagerwall@citrix.com
      Fixes: bc22c17e ("bzip2/lzma: library support for gzip, bzip2 and lzma decompression")
      Signed-off-by: Ross Lagerwall <ross.lagerwall@citrix.com>
      Cc: Alain Knaff <alain@knaff.lu>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      bf6acd5d
    • mm/huge_memory: avoid PMD-size page cache if needed · d659b715
      Gavin Shan authored
      The xarray can't support an arbitrary page cache size.  The largest
      supported page cache size is defined as MAX_PAGECACHE_ORDER by commit
      099d9064 ("mm/filemap: make MAX_PAGECACHE_ORDER acceptable to xarray").
      However, it's possible to create a 512MB page cache in the huge memory
      collapse path on an ARM64 system whose base page size is 64KB.  This
      512MB page cache breaks the limitation, and a warning is raised when
      the xarray entry is split, as shown in the following example.
      
      [root@dhcp-10-26-1-207 ~]# cat /proc/1/smaps | grep KernelPageSize
      KernelPageSize:       64 kB
      [root@dhcp-10-26-1-207 ~]# cat /tmp/test.c
         :
      int main(int argc, char **argv)
      {
      	const char *filename = TEST_XFS_FILENAME;
      	int fd = 0;
      	void *buf = (void *)-1, *p;
      	int pgsize = getpagesize();
      	int ret = 0;
      
      	if (pgsize != 0x10000) {
      		fprintf(stdout, "System with 64KB base page size is required!\n");
      		return -EPERM;
      	}
      
      	system("echo 0 > /sys/devices/virtual/bdi/253:0/read_ahead_kb");
      	system("echo 1 > /proc/sys/vm/drop_caches");
      
      	/* Open the xfs file */
      	fd = open(filename, O_RDONLY);
      	assert(fd > 0);
      
      	/* Create VMA */
      	buf = mmap(NULL, TEST_MEM_SIZE, PROT_READ, MAP_SHARED, fd, 0);
      	assert(buf != (void *)-1);
      	fprintf(stdout, "mapped buffer at 0x%p\n", buf);
      
      	/* Populate VMA */
      	ret = madvise(buf, TEST_MEM_SIZE, MADV_NOHUGEPAGE);
      	assert(ret == 0);
      	ret = madvise(buf, TEST_MEM_SIZE, MADV_POPULATE_READ);
      	assert(ret == 0);
      
      	/* Collapse VMA */
      	ret = madvise(buf, TEST_MEM_SIZE, MADV_HUGEPAGE);
      	assert(ret == 0);
      	ret = madvise(buf, TEST_MEM_SIZE, MADV_COLLAPSE);
      	if (ret) {
      		fprintf(stdout, "Error %d to madvise(MADV_COLLAPSE)\n", errno);
      		goto out;
      	}
      
      	/* Split xarray entry. Write permission is needed */
      	munmap(buf, TEST_MEM_SIZE);
      	buf = (void *)-1;
      	close(fd);
      	fd = open(filename, O_RDWR);
      	assert(fd > 0);
      	fallocate(fd, FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE,
      		  TEST_MEM_SIZE - pgsize, pgsize);
      out:
      	if (buf != (void *)-1)
      		munmap(buf, TEST_MEM_SIZE);
      	if (fd > 0)
      		close(fd);
      
      	return ret;
      }
      
      [root@dhcp-10-26-1-207 ~]# gcc /tmp/test.c -o /tmp/test
      [root@dhcp-10-26-1-207 ~]# /tmp/test
       ------------[ cut here ]------------
       WARNING: CPU: 25 PID: 7560 at lib/xarray.c:1025 xas_split_alloc+0xf8/0x128
       Modules linked in: nft_fib_inet nft_fib_ipv4 nft_fib_ipv6 nft_fib    \
       nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_ct      \
       nft_chain_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4      \
       ip_set rfkill nf_tables nfnetlink vfat fat virtio_balloon drm fuse   \
       xfs libcrc32c crct10dif_ce ghash_ce sha2_ce sha256_arm64 virtio_net  \
       sha1_ce net_failover virtio_blk virtio_console failover dimlib virtio_mmio
       CPU: 25 PID: 7560 Comm: test Kdump: loaded Not tainted 6.10.0-rc7-gavin+ #9
       Hardware name: QEMU KVM Virtual Machine, BIOS edk2-20240524-1.el9 05/24/2024
       pstate: 83400005 (Nzcv daif +PAN -UAO +TCO +DIT -SSBS BTYPE=--)
       pc : xas_split_alloc+0xf8/0x128
       lr : split_huge_page_to_list_to_order+0x1c4/0x780
       sp : ffff8000ac32f660
       x29: ffff8000ac32f660 x28: ffff0000e0969eb0 x27: ffff8000ac32f6c0
       x26: 0000000000000c40 x25: ffff0000e0969eb0 x24: 000000000000000d
       x23: ffff8000ac32f6c0 x22: ffffffdfc0700000 x21: 0000000000000000
       x20: 0000000000000000 x19: ffffffdfc0700000 x18: 0000000000000000
       x17: 0000000000000000 x16: ffffd5f3708ffc70 x15: 0000000000000000
       x14: 0000000000000000 x13: 0000000000000000 x12: 0000000000000000
       x11: ffffffffffffffc0 x10: 0000000000000040 x9 : ffffd5f3708e692c
       x8 : 0000000000000003 x7 : 0000000000000000 x6 : ffff0000e0969eb8
       x5 : ffffd5f37289e378 x4 : 0000000000000000 x3 : 0000000000000c40
       x2 : 000000000000000d x1 : 000000000000000c x0 : 0000000000000000
       Call trace:
        xas_split_alloc+0xf8/0x128
        split_huge_page_to_list_to_order+0x1c4/0x780
        truncate_inode_partial_folio+0xdc/0x160
        truncate_inode_pages_range+0x1b4/0x4a8
        truncate_pagecache_range+0x84/0xa0
        xfs_flush_unmap_range+0x70/0x90 [xfs]
        xfs_file_fallocate+0xfc/0x4d8 [xfs]
        vfs_fallocate+0x124/0x2f0
        ksys_fallocate+0x4c/0xa0
        __arm64_sys_fallocate+0x24/0x38
        invoke_syscall.constprop.0+0x7c/0xd8
        do_el0_svc+0xb4/0xd0
        el0_svc+0x44/0x1d8
        el0t_64_sync_handler+0x134/0x150
        el0t_64_sync+0x17c/0x180
      
      Fix it by correcting the supported page cache orders, with different
      sets for DAX and other files.  With this correction, a 512MB page cache
      becomes disallowed for all non-DAX files on ARM64 systems where the
      base page size is 64KB.  After this patch is applied, the test program
      fails with -EINVAL, returned from __thp_vma_allowable_orders() to the
      madvise(MADV_COLLAPSE) system call that collapses the page cache.
      
      Link: https://lkml.kernel.org/r/20240715000423.316491-1-gshan@redhat.com
      Fixes: 6b24ca4a ("mm: Use multi-index entries in the page cache")
      Signed-off-by: Gavin Shan <gshan@redhat.com>
      Acked-by: David Hildenbrand <david@redhat.com>
      Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
      Acked-by: Zi Yan <ziy@nvidia.com>
      Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
      Cc: Barry Song <baohua@kernel.org>
      Cc: Don Dutile <ddutile@redhat.com>
      Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
      Cc: Peter Xu <peterx@redhat.com>
      Cc: Ryan Roberts <ryan.roberts@arm.com>
      Cc: William Kucharski <william.kucharski@oracle.com>
      Cc: <stable@vger.kernel.org>	[5.17+]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      d659b715
    • mm: huge_memory: use !CONFIG_64BIT to relax huge page alignment on 32 bit machines · d9592025
      Yang Shi authored
      Yves-Alexis Perez reported that commit 4ef9ad19 ("mm: huge_memory:
      don't force huge page alignment on 32 bit") didn't work for x86_32 [1].
      That is because x86_32 uses CONFIG_X86_32 instead of CONFIG_32BIT.

      !CONFIG_64BIT covers all 32 bit machines.
      
      [1] https://lore.kernel.org/linux-mm/CAHbLzkr1LwH3pcTgM+aGQ31ip2bKqiqEQ8=FQB+t2c3dhNKNHA@mail.gmail.com/
      
      Link: https://lkml.kernel.org/r/20240712155855.1130330-1-yang@os.amperecomputing.com
      Fixes: 4ef9ad19 ("mm: huge_memory: don't force huge page alignment on 32 bit")
      Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
      Reported-by: Yves-Alexis Perez <corsac@debian.org>
      Tested-by: Yves-Alexis Perez <corsac@debian.org>
      Acked-by: David Hildenbrand <david@redhat.com>
      Cc: Ben Hutchings <ben@decadent.org.uk>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Jiri Slaby <jirislaby@kernel.org>
      Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
      Cc: Rik van Riel <riel@surriel.com>
      Cc: Salvatore Bonaccorso <carnil@debian.org>
      Cc: Suren Baghdasaryan <surenb@google.com>
      Cc: <stable@vger.kernel.org>	[6.8+]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      d9592025
    • mm: fix old/young bit handling in the faulting path · 4cd7ba16
      Ram Tummala authored
      Commit 3bd786f7 ("mm: convert do_set_pte() to set_pte_range()")
      replaced do_set_pte() with set_pte_range(), and that introduced a
      regression in the following faulting path for non-anonymous VMAs,
      causing the PTE for the faulting address to be marked old instead of
      young.
      
      handle_pte_fault()
        do_pte_missing()
          do_fault()
            do_read_fault() || do_cow_fault() || do_shared_fault()
              finish_fault()
                set_pte_range()
      
      The polarity of the prefault calculation is incorrect, so prefault is
      wrongly set for the faulting address itself.  The following check then
      incorrectly marks the PTE old rather than young, and on some
      architectures this causes a double fault to mark it young when the
      access is retried.
      
          if (prefault && arch_wants_old_prefaulted_pte())
              entry = pte_mkold(entry);
      
      On a subsequent fault on the same address, the faulting path sees a
      non-NULL vmf->pte and, instead of reaching the do_pte_missing() path,
      the PTE is then correctly marked young in handle_pte_fault() itself.

      Due to this bug, performance degradation is observed in the fault
      handling path because of the unnecessary double faulting.
      
      Link: https://lkml.kernel.org/r/20240710014539.746200-1-rtummala@nvidia.com
      Fixes: 3bd786f7 ("mm: convert do_set_pte() to set_pte_range()")
      Signed-off-by: Ram Tummala <rtummala@nvidia.com>
      Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>
      Cc: Alistair Popple <apopple@nvidia.com>
      Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
      Cc: Yin Fengwei <fengwei.yin@intel.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      4cd7ba16
    • dt-bindings: arm: update James Clark's email address · 34e526f6
      James Clark authored
      My new address is james.clark@linaro.org
      
      Link: https://lkml.kernel.org/r/20240709102512.31212-3-james.clark@linaro.org
      Signed-off-by: James Clark <james.clark@linaro.org>
      Cc: Bjorn Andersson <quic_bjorande@quicinc.com>
      Cc: Conor Dooley <conor+dt@kernel.org>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Geliang Tang <geliang@kernel.org>
      Cc: Hao Zhang <quic_hazha@quicinc.com>
      Cc: Jakub Kicinski <kuba@kernel.org>
      Cc: Jiri Kosina <jikos@kernel.org>
      Cc: Kees Cook <kees@kernel.org>
      Cc: Krzysztof Kozlowski <krzk+dt@kernel.org>
      Cc: Mao Jinlong <quic_jinlmao@quicinc.com>
      Cc: Matthieu Baerts <matttbe@kernel.org>
      Cc: Matt Ranostay <matt@ranostay.sg>
      Cc: Mike Leach <mike.leach@linaro.org>
      Cc: Oleksij Rempel <o.rempel@pengutronix.de>
      Cc: Rob Herring (Arm) <robh@kernel.org>
      Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      34e526f6
    • MAINTAINERS: mailmap: update James Clark's email address · 5bf6f3c5
      James Clark authored
      My new address is james.clark@linaro.org
      
      Link: https://lkml.kernel.org/r/20240709102512.31212-2-james.clark@linaro.org
      Signed-off-by: James Clark <james.clark@linaro.org>
      Cc: Bjorn Andersson <quic_bjorande@quicinc.com>
      Cc: Conor Dooley <conor+dt@kernel.org>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Geliang Tang <geliang@kernel.org>
      Cc: Hao Zhang <quic_hazha@quicinc.com>
      Cc: Jakub Kicinski <kuba@kernel.org>
      Cc: Jiri Kosina <jikos@kernel.org>
      Cc: Kees Cook <kees@kernel.org>
      Cc: Krzysztof Kozlowski <krzk+dt@kernel.org>
      Cc: Mao Jinlong <quic_jinlmao@quicinc.com>
      Cc: Matthieu Baerts <matttbe@kernel.org>
      Cc: Matt Ranostay <matt@ranostay.sg>
      Cc: Mike Leach <mike.leach@linaro.org>
      Cc: Oleksij Rempel <o.rempel@pengutronix.de>
      Cc: Rob Herring (Arm) <robh@kernel.org>
      Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      5bf6f3c5
  2. 25 Jul, 2024 25 commits
  3. 24 Jul, 2024 5 commits
    • Merge tag 'phy-for-6.11' of git://git.kernel.org/pub/scm/linux/kernel/git/phy/linux-phy · c33ffdb7
      Linus Torvalds authored
      Pull phy updates from Vinod Koul:
       "New support:
         - Samsung Exynos gs101 drd combo phy
         - Qualcomm SC8180x USB uniphy, IPQ9574 QMP PCIe phy
         - Airoha EN7581 PCIe phy
         - Freescale i.MX8Q HSIO SerDes phy
         - Starfive jh7110 dphy tx
      
        Updates:
         - Resume support for j721e-wiz driver
         - Updates to Exynos usbdrd driver
         - Support for optional power domains in g12a usb2-phy driver
         - Debugfs support and updates to zynqmp driver"
      
      * tag 'phy-for-6.11' of git://git.kernel.org/pub/scm/linux/kernel/git/phy/linux-phy: (56 commits)
        phy: airoha: Add dtime and Rx AEQ IO registers
        dt-bindings: phy: airoha: Add dtime and Rx AEQ IO registers
        dt-bindings: phy: rockchip-emmc-phy: Convert to dtschema
        dt-bindings: phy: qcom,qmp-usb: fix spelling error
        phy: exynos5-usbdrd: support Exynos USBDRD 3.1 combo phy (HS & SS)
        phy: exynos5-usbdrd: convert Vbus supplies to regulator_bulk
        phy: exynos5-usbdrd: convert (phy) register access clock to clk_bulk
        phy: exynos5-usbdrd: convert core clocks to clk_bulk
        phy: exynos5-usbdrd: support isolating HS and SS ports independently
        dt-bindings: phy: samsung,usb3-drd-phy: add gs101 compatible
        phy: core: Fix documentation of of_phy_get
        phy: starfive: Correct the dphy configure process
        phy: zynqmp: Add debugfs support
        phy: zynqmp: Take the phy mutex in xlate
        phy: zynqmp: Only wait for PLL lock "primary" instances
        phy: zynqmp: Store instance instead of type
        phy: zynqmp: Enable reference clock correctly
        phy: cadence-torrent: Check return value on register read
        phy: Fix the cacography in phy-exynos5250-usb2.c
        phy: phy-rockchip-samsung-hdptx: Select CONFIG_MFD_SYSCON
        ...
      c33ffdb7
    • Merge tag 'soundwire-6.11-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vkoul/soundwire · ad7b0b7b
      Linus Torvalds authored
      Pull soundwire updates from Vinod Koul:
      
       - Simplification across subsystem using cleanup.h
      
       - Support for debugfs to read/write commands
      
       - Few Intel and Qualcomm driver updates
      
      * tag 'soundwire-6.11-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vkoul/soundwire:
        soundwire: debugfs: simplify with cleanup.h
        soundwire: cadence: simplify with cleanup.h
        soundwire: intel_ace2x: simplify with cleanup.h
        soundwire: intel_ace2x: simplify return path in hw_params
        soundwire: intel: simplify with cleanup.h
        soundwire: intel: simplify return path in hw_params
        soundwire: amd_init: simplify with cleanup.h
        soundwire: amd: simplify with cleanup.h
        soundwire: amd: simplify return path in hw_params
        soundwire: intel_auxdevice: start the bus at default frequency
        soundwire: intel_auxdevice: add cs42l43 codec to wake_capable_list
        drivers:soundwire: qcom: cleanup port maask calculations
        soundwire: bus: simplify by using local slave->prop
        soundwire: generic_bandwidth_allocation: change port_bo parameter to pointer
        soundwire: Intel: clarify Copyright information
        soundwire: intel_ace2.x: add AC timing extensions for PantherLake
        soundwire: bus: add stream refcount
        soundwire: debugfs: add interface to read/write commands
      ad7b0b7b
    • Merge tag 'dmaengine-6.11-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vkoul/dmaengine · 7a46b17d
      Linus Torvalds authored
      Pull dmaengine updates from Vinod Koul:
       "New support:
      
         - New dmaengine_prep_peripheral_dma_vec() to support transfers using
           dma vectors and documentation and user in AXI dma
      
         - STMicro STM32 DMA3 support and new capabilities of cyclic dma
      
        Updates:
      
         - Yaml conversion for Freescale imx dma and qdma bindings,
           sprd sc9860 dma binding
      
         - Altera msgdma updates for descriptor management"
      
      * tag 'dmaengine-6.11-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vkoul/dmaengine: (35 commits)
        dt-bindings: fsl-qdma: fix interrupts 'if' check logic
        dt-bindings: dma: sprd,sc9860-dma: convert to YAML
        dmaengine: fsl-dpaa2-qdma: add missing MODULE_DESCRIPTION() macro
        dmaengine: ti: add missing MODULE_DESCRIPTION() macros
        dmaengine: ti: cppi41: add missing MODULE_DESCRIPTION() macro
        dmaengine: virt-dma: add missing MODULE_DESCRIPTION() macro
        dmaengine: ti: k3-udma: Fix BCHAN count with UHC and HC channels
        dmaengine: sh: rz-dmac: Fix lockdep assert warning
        dmaengine: qcom: gpi: clean up the IRQ disable/enable in gpi_reset_chan()
        dmaengine: fsl-edma: change the memory access from local into remote mode in i.MX 8QM
        dmaengine: qcom: gpi: remove unused struct 'reg_info'
        dmaengine: moxart-dma: remove unused struct 'moxart_filter_data'
        dt-bindings: fsl-qdma: Convert to yaml format
        dmaengine: fsl-edma: remove redundant "idle" field from fsl_chan
        dmaengine: fsl-edma: request per-channel IRQ only when channel is allocated
        dmaengine: stm32-dma3: defer channel registration to specify channel name
        dmaengine: add channel device name to channel registration
        dmaengine: stm32-dma3: improve residue granularity
        dmaengine: stm32-dma3: add device_pause and device_resume ops
        dmaengine: stm32-dma3: add DMA_MEMCPY capability
        ...
      7a46b17d
    • sysctl: treewide: constify the ctl_table argument of proc_handlers · 78eb4ea2
      Joel Granados authored
      Const-qualify the struct ctl_table argument in the proc_handler
      function signatures.  This is a prerequisite to moving the static
      ctl_table structs into .rodata, which will ensure that proc_handler
      function pointers cannot be modified.
      
      This patch has been generated by the following coccinelle script:
      
      ```
        virtual patch
      
        @r1@
        identifier ctl, write, buffer, lenp, ppos;
        identifier func !~ "appldata_(timer|interval)_handler|sched_(rt|rr)_handler|rds_tcp_skbuf_handler|proc_sctp_do_(hmac_alg|rto_min|rto_max|udp_port|alpha_beta|auth|probe_interval)";
        @@
      
        int func(
        - struct ctl_table *ctl
        + const struct ctl_table *ctl
          ,int write, void *buffer, size_t *lenp, loff_t *ppos);
      
        @r2@
        identifier func, ctl, write, buffer, lenp, ppos;
        @@
      
        int func(
        - struct ctl_table *ctl
        + const struct ctl_table *ctl
          ,int write, void *buffer, size_t *lenp, loff_t *ppos)
        { ... }
      
        @r3@
        identifier func;
        @@
      
        int func(
        - struct ctl_table *
        + const struct ctl_table *
          ,int , void *, size_t *, loff_t *);
      
        @r4@
        identifier func, ctl;
        @@
      
        int func(
        - struct ctl_table *ctl
        + const struct ctl_table *ctl
          ,int , void *, size_t *, loff_t *);
      
        @r5@
        identifier func, write, buffer, lenp, ppos;
        @@
      
        int func(
        - struct ctl_table *
        + const struct ctl_table *
          ,int write, void *buffer, size_t *lenp, loff_t *ppos);
      
      ```
      
      * Code formatting was adjusted in xfs_sysctl.c to comply with code
        conventions.  The xfs_stats_clear_proc_handler,
        xfs_panic_mask_proc_handler and xfs_deprecated_dointvec_minmax
        handlers were adjusted.
      
      * The ctl_table argument in proc_watchdog_common was const qualified.
        This is called from a proc_handler itself and is calling back into
        another proc_handler, making it necessary to change it as part of the
        proc_handler migration.
      Co-developed-by: Thomas Weißschuh <linux@weissschuh.net>
      Signed-off-by: Thomas Weißschuh <linux@weissschuh.net>
      Co-developed-by: Joel Granados <j.granados@samsung.com>
      Signed-off-by: Joel Granados <j.granados@samsung.com>
      78eb4ea2
    • Merge tag 'random-6.11-rc1-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/crng/random · 7a3fad30
      Linus Torvalds authored
      Pull random number generator updates from Jason Donenfeld:
       "This adds getrandom() support to the vDSO.
      
        First, it adds a new kind of mapping to mmap(2), MAP_DROPPABLE, which
        lets the kernel zero out pages anytime under memory pressure, which
        enables allocating memory that never gets swapped to disk but also
        doesn't count as being mlocked.
      
        Then, the vDSO implementation of getrandom() is introduced in a
        generic manner and hooked into random.c.
      
        Next, this is implemented on x86. (Also, though it's not ready for
        this pull, somebody has begun an arm64 implementation already)
      
        Finally, two vDSO selftests are added.
      
        There are also two housekeeping cleanup commits"
      
      * tag 'random-6.11-rc1-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/crng/random:
        MAINTAINERS: add random.h headers to RNG subsection
        random: note that RNDGETPOOL was removed in 2.6.9-rc2
        selftests/vDSO: add tests for vgetrandom
        x86: vdso: Wire up getrandom() vDSO implementation
        random: introduce generic vDSO getrandom() implementation
        mm: add MAP_DROPPABLE for designating always lazily freeable mappings
      7a3fad30