- 07 Nov, 2023 13 commits
-
-
Evan Green authored
In /proc/cpuinfo, most of the information we show for each processor is specific to that hart: marchid, mvendorid, mimpid, processor, hart, compatible, and the mmu size. But the ISA string gets filtered through a lowest-common-denominator mask, so that if one CPU is missing an ISA extension, no CPUs will show it. Now that we track the ISA extensions for each hart, let's report ISA extension info accurately per-hart in /proc/cpuinfo. We cannot change the "isa:" line, as usermode may be relying on that line to show only the common set of extensions supported across all harts. Add a new "hart isa" line instead, which reports the true set of extensions for that hart.

Signed-off-by: Evan Green <evan@rivosinc.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Conor Dooley <conor.dooley@microchip.com>
Link: https://lore.kernel.org/r/20231106232439.3176268-1-evan@rivosinc.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
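A minimal userspace mock of the resulting split (the ISA strings are illustrative examples, not values from the patch): the existing "isa" line keeps the intersection across harts, while the new "hart isa" line shows this hart's full set.

```c
#include <stdio.h>

/* Mock of the /proc/cpuinfo output described above; both strings are
 * illustrative, not taken from the patch itself. */
static void show_isa_lines(void)
{
	const char *common_isa = "rv64imafdc";        /* intersection across all harts */
	const char *hart_isa   = "rv64imafdc_zicbom"; /* this hart's true extension set */

	printf("isa\t\t: %s\n", common_isa);  /* unchanged, for existing usermode parsers */
	printf("hart isa\t: %s\n", hart_isa); /* new per-hart line */
}

int main(void)
{
	show_isa_lines();
	return 0;
}
```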
-
Palmer Dabbelt authored
Without this I get a bunch of warnings along the lines of:

arch/riscv/kernel/module.c:535:26: error: positional initialization of field in 'struct' declared with 'designated_init' attribute [-Werror=designated-init]
535 | [R_RISCV_32] = { apply_r_riscv_32_rela },

This just makes the member initializers explicit instead of positional. I also aligned some of the table, but mostly just to make the batch editing go faster.

Fixes: b51fc88c ("Merge patch series "riscv: Add remaining module relocations and tests"")
Reviewed-by: Charlie Jenkins <charlie@rivosinc.com>
Link: https://lore.kernel.org/r/20231107155529.8368-1-palmer@rivosinc.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
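A self-contained illustration of the warning and the fix, assuming GCC's designated_init type attribute (the struct and member names here are illustrative, not the kernel's relocation table types):

```c
#include <stdio.h>

/* Structs declared with this attribute must be initialized with named
 * members; -Werror=designated-init promotes violations to errors. */
struct reloc_handler {
	int (*handler)(int);
} __attribute__((designated_init));

static int apply_r_riscv_32_rela(int v) { return v; }

static const struct reloc_handler table[] = {
	/* [1] = { apply_r_riscv_32_rela },          positional: warns      */
	[1] = { .handler = apply_r_riscv_32_rela },  /* designated: accepted */
};

int main(void)
{
	printf("%d\n", table[1].handler(42));
	return 0;
}
```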
-
Palmer Dabbelt authored
Charlie Jenkins <charlie@rivosinc.com> says:

A handful of module relocations were missing; this series adds the remaining ones. I also wrote some test cases to ensure that module loading works properly. Some relocations cannot be supported in the kernel; these include the ones that rely on thread-local storage and dynamic linking.

This series also overhauls the implementation of the ADD/SUB/SET/ULEB128 relocations to handle overflow. "Overflow" is different for ULEB128, since it is a variable-length encoding that the compiler can be expected to generate enough space for. Instead of overflowing, ULEB128 will expand into the next 8-bit segment of the location.

A psABI proposal [1] was merged that mandates that SET_ULEB128 and SUB_ULEB128 are paired; however, the discussion following the merging of the pull request revealed that while the pull request was valid, it would be better for linkers to properly handle this overflow. This series proactively implements this methodology for future compatibility.

This can be tested by enabling KUNIT, RUNTIME_KERNEL_TESTING_MENU, and RISCV_MODULE_LINKING_KUNIT.

[1] https://github.com/riscv-non-isa/riscv-elf-psabi-doc/pull/403

* b4-shazam-merge:
  riscv: Add tests for riscv module loading
  riscv: Add remaining module relocations
  riscv: Avoid unaligned access when relocating modules

Link: https://lore.kernel.org/r/20231101-module_relocations-v9-0-8dfa3483c400@rivosinc.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
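For reference, a minimal userspace sketch of ULEB128 decoding, showing how a value that outgrows one byte continues into the next 8-bit segment (the helper name is illustrative):

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Each ULEB128 byte carries 7 payload bits; the high bit marks a
 * continuation into the next 8-bit segment of the location. */
static uint64_t uleb128_decode(const uint8_t *p, size_t *len)
{
	uint64_t value = 0;
	unsigned int shift = 0;
	size_t i = 0;

	do {
		value |= (uint64_t)(p[i] & 0x7f) << shift;
		shift += 7;
	} while (p[i++] & 0x80);

	*len = i;
	return value;
}

int main(void)
{
	/* 128 doesn't fit in one byte: 0x80 (continuation set) then 0x01. */
	const uint8_t buf[] = { 0x80, 0x01 };
	size_t len;

	printf("value=%llu bytes=%zu\n",
	       (unsigned long long)uleb128_decode(buf, &len), len);
	return 0;
}
```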
-
Charlie Jenkins authored
Add test cases for the two main groups of relocations added: SUB and SET, along with ULEB128.

Signed-off-by: Charlie Jenkins <charlie@rivosinc.com>
Link: https://lore.kernel.org/r/20231101-module_relocations-v9-3-8dfa3483c400@rivosinc.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
Charlie Jenkins authored
Add all final module relocations and add error logs explaining the ones that are not supported. Implement overflow checks for ADD/SUB/SET/ULEB128 relocations.

Signed-off-by: Charlie Jenkins <charlie@rivosinc.com>
Link: https://lore.kernel.org/r/20231101-module_relocations-v9-2-8dfa3483c400@rivosinc.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
Emil Renner Berthing authored
With the C extension, regular 32-bit instructions are not necessarily aligned on 4-byte boundaries. RISC-V instructions are in fact an ordered list of 16-bit little-endian "parcels", so access the instruction as such. This should also make the code work in case someone builds a big-endian RISC-V machine.

Signed-off-by: Emil Renner Berthing <kernel@esmil.dk>
Signed-off-by: Charlie Jenkins <charlie@rivosinc.com>
Link: https://lore.kernel.org/r/20231101-module_relocations-v9-1-8dfa3483c400@rivosinc.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
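A userspace sketch of parcel-wise access under the assumptions above (le16toh() is the glibc/BSD byte-order helper; the kernel itself uses le16_to_cpu-style accessors, and the function name is illustrative):

```c
#include <stdint.h>
#include <endian.h>

/* Read a 32-bit instruction as two 16-bit little-endian parcels, so the
 * access works at 2-byte alignment and on big-endian hosts alike. */
static uint32_t riscv_insn_read(const uint16_t *parcel)
{
	return (uint32_t)le16toh(parcel[0]) |
	       ((uint32_t)le16toh(parcel[1]) << 16);
}
```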
-
Christoph Hellwig authored
The cache ops are also used by the pmem code, which is unconditionally built into the kernel. Move them into a separate file that is built based on the correct config option.

Fixes: fd962781 ("riscv: RISCV_NONSTANDARD_CACHE_OPS shouldn't depend on RISCV_DMA_NONCOHERENT")
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Conor Dooley <conor.dooley@microchip.com>
Tested-by: Conor Dooley <conor.dooley@microchip.com>
Reviewed-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com>
Tested-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com> #
Link: https://lore.kernel.org/r/20231028155101.1039049-1-hch@lst.de
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
Palmer Dabbelt authored
Alexandre Ghiti <alexghiti@rivosinc.com> says:

This series optimizes the TLB flushes on riscv, which used to simply flush the whole TLB whatever the size of the range to flush or the size of the stride. Patch 3 introduces a threshold that is microarchitecture-specific and will very likely be modified by vendors; not sure though which mechanism we'll use to do that (dt? alternatives? vendor initialization code?).

* b4-shazam-merge:
  riscv: Improve flush_tlb_kernel_range()
  riscv: Make __flush_tlb_range() loop over pte instead of flushing the whole tlb
  riscv: Improve flush_tlb_range() for hugetlb pages
  riscv: Improve tlb_flush()

Link: https://lore.kernel.org/r/20231030133027.19542-1-alexghiti@rivosinc.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
Alexandre Ghiti authored
This function used to simply flush the whole TLB of all harts; be more subtle and try to only flush the range. The problem is that we can only use PAGE_SIZE as the stride since we don't know the size of the underlying mapping, so this function is an improvement only if the size of the region to flush is < threshold * PAGE_SIZE.

Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Tested-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com> # On RZ/Five SMARC
Reviewed-by: Samuel Holland <samuel.holland@sifive.com>
Tested-by: Samuel Holland <samuel.holland@sifive.com>
Link: https://lore.kernel.org/r/20231030133027.19542-5-alexghiti@rivosinc.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
Alexandre Ghiti authored
Currently, when the range to flush covers more than one page (a 4K page or a hugepage), __flush_tlb_range() flushes the whole tlb. Flushing the whole tlb comes with a greater cost than flushing a single entry, so we should flush single entries up to a certain threshold so that: threshold * (cost of flushing a single entry) < (cost of flushing the whole tlb).

Co-developed-by: Mayuresh Chitale <mchitale@ventanamicro.com>
Signed-off-by: Mayuresh Chitale <mchitale@ventanamicro.com>
Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Tested-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com> # On RZ/Five SMARC
Reviewed-by: Samuel Holland <samuel.holland@sifive.com>
Tested-by: Samuel Holland <samuel.holland@sifive.com>
Link: https://lore.kernel.org/r/20231030133027.19542-4-alexghiti@rivosinc.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
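A runnable userspace mock of the policy just described, assuming an illustrative threshold value and stub flush helpers (none of these names are the kernel's):

```c
#include <stdio.h>

#define PAGE_SIZE 4096UL
static unsigned long tlb_flush_threshold = 64;	/* assumed value; vendor-tunable */

/* Stubs standing in for sfence.vma-based flushes. */
static void local_flush_tlb_page(unsigned long addr) { printf("sfence.vma %#lx\n", addr); }
static void local_flush_tlb_all(void) { printf("sfence.vma (full)\n"); }

/* Flush per-entry while threshold * single-entry cost < full-flush cost. */
static void flush_tlb_range_sketch(unsigned long start, unsigned long size,
				   unsigned long stride)
{
	if (size <= tlb_flush_threshold * stride) {
		for (unsigned long addr = start; addr < start + size; addr += stride)
			local_flush_tlb_page(addr);
	} else {
		local_flush_tlb_all();
	}
}

int main(void)
{
	flush_tlb_range_sketch(0x10000, 4 * PAGE_SIZE, PAGE_SIZE);
	return 0;
}
```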
-
Alexandre Ghiti authored
flush_tlb_range() uses a fixed stride of PAGE_SIZE and in its current form, when a hugetlb mapping needs to be flushed, flush_tlb_range() flushes the whole tlb: so set a stride of the size of the hugetlb mapping in order to only flush the hugetlb mapping. However, if the hugepage is a NAPOT region, all PTEs that constitute this mapping must be invalidated, so the stride size must actually be the size of the PTE. Note that THPs are directly handled by flush_pmd_tlb_range().

Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
Reviewed-by: Samuel Holland <samuel.holland@sifive.com>
Tested-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com> # On RZ/Five SMARC
Link: https://lore.kernel.org/r/20231030133027.19542-3-alexghiti@rivosinc.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
Alexandre Ghiti authored
For now, tlb_flush() simply calls flush_tlb_mm(), which results in a flush of the whole TLB. So let's use mmu_gather fields to provide a more fine-grained flush of the TLB.

Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Samuel Holland <samuel.holland@sifive.com>
Tested-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com> # On RZ/Five SMARC
Link: https://lore.kernel.org/r/20231030133027.19542-2-alexghiti@rivosinc.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
Andreas Schwab authored
This adds a separate segment for kernel text in /proc/kcore, which has a different address than the direct linear map.

Signed-off-by: Andreas Schwab <schwab@suse.de>
Link: https://lore.kernel.org/r/mvmh6m758ao.fsf@suse.de
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
- 06 Nov, 2023 9 commits
-
-
Clément Léger authored
Some data were incorrectly annotated with SYM_FUNC_*() macros instead of SYM_DATA_*() ones. Use the correct ones.

Signed-off-by: Clément Léger <cleger@rivosinc.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Link: https://lore.kernel.org/r/20231024132655.730417-4-cleger@rivosinc.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
Clément Léger authored
ENTRY()/END()/WEAK() macros are deprecated and we should make use of the new SYM_*() macros [1] for better annotation of symbols. Replace the deprecated ones with the new ones and fix wrong usage of END()/ENDPROC() to correctly describe the symbols.

[1] https://docs.kernel.org/core-api/asm-annotations.html

Signed-off-by: Clément Léger <cleger@rivosinc.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Link: https://lore.kernel.org/r/20231024132655.730417-3-cleger@rivosinc.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
Clément Léger authored
For the sake of coherency, use local labels in assembly when applicable. This also avoids kprobes being confused when applying a kprobe, since the size of a function is computed by checking where the next visible symbol is located. This might end up computing some function sizes to be way shorter than expected, and thus failing to apply kprobes to the specified offset.

Signed-off-by: Clément Léger <cleger@rivosinc.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Link: https://lore.kernel.org/r/20231024132655.730417-2-cleger@rivosinc.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
Geert Uytterhoeven authored
When flashing loader.bin for K210 using kflash:

[ERROR] This is an ELF file and cannot be programmed to flash directly: arch/riscv/boot/loader.bin

Before, loader.bin relied on "OBJCOPYFLAGS := -O binary" in the main RISC-V Makefile to create a boot image with the right format. With this removed, the image is now created in the wrong (ELF) format. Fix this by adding an explicit rule.

Fixes: 505b0295 ("riscv: Remove duplicate objcopy flag")
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Link: https://lore.kernel.org/r/1086025809583809538dfecaa899892218f44e7e.1698159066.git.geert+renesas@glider.be
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
Palmer Dabbelt authored
Alexandre Ghiti <alexghiti@rivosinc.com> says:

This series optimizes the TLB flushes on riscv, which used to simply flush the whole TLB whatever the size of the range to flush or the size of the stride. Patch 3 introduces a threshold that is microarchitecture-specific and will very likely be modified by vendors; not sure though which mechanism we'll use to do that (dt? alternatives? vendor initialization code?).

* b4-shazam-merge:
  riscv: Improve flush_tlb_kernel_range()
  riscv: Make __flush_tlb_range() loop over pte instead of flushing the whole tlb
  riscv: Improve flush_tlb_range() for hugetlb pages
  riscv: Improve tlb_flush()

Link: https://lore.kernel.org/r/20231030133027.19542-1-alexghiti@rivosinc.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
Alexandre Ghiti authored
This function used to simply flush the whole TLB of all harts; be more subtle and try to only flush the range. The problem is that we can only use PAGE_SIZE as the stride since we don't know the size of the underlying mapping, so this function is an improvement only if the size of the region to flush is < threshold * PAGE_SIZE.

Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Tested-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com> # On RZ/Five SMARC
Reviewed-by: Samuel Holland <samuel.holland@sifive.com>
Tested-by: Samuel Holland <samuel.holland@sifive.com>
Link: https://lore.kernel.org/r/20231030133027.19542-5-alexghiti@rivosinc.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
Alexandre Ghiti authored
Currently, when the range to flush covers more than one page (a 4K page or a hugepage), __flush_tlb_range() flushes the whole tlb. Flushing the whole tlb comes with a greater cost than flushing a single entry, so we should flush single entries up to a certain threshold so that: threshold * (cost of flushing a single entry) < (cost of flushing the whole tlb).

Co-developed-by: Mayuresh Chitale <mchitale@ventanamicro.com>
Signed-off-by: Mayuresh Chitale <mchitale@ventanamicro.com>
Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Tested-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com> # On RZ/Five SMARC
Reviewed-by: Samuel Holland <samuel.holland@sifive.com>
Tested-by: Samuel Holland <samuel.holland@sifive.com>
Link: https://lore.kernel.org/r/20231030133027.19542-4-alexghiti@rivosinc.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
Alexandre Ghiti authored
flush_tlb_range() uses a fixed stride of PAGE_SIZE and in its current form, when a hugetlb mapping needs to be flushed, flush_tlb_range() flushes the whole tlb: so set a stride of the size of the hugetlb mapping in order to only flush the hugetlb mapping. However, if the hugepage is a NAPOT region, all PTEs that constitute this mapping must be invalidated, so the stride size must actually be the size of the PTE. Note that THPs are directly handled by flush_pmd_tlb_range().

Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
Reviewed-by: Samuel Holland <samuel.holland@sifive.com>
Tested-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com> # On RZ/Five SMARC
Link: https://lore.kernel.org/r/20231030133027.19542-3-alexghiti@rivosinc.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
Alexandre Ghiti authored
For now, tlb_flush() simply calls flush_tlb_mm(), which results in a flush of the whole TLB. So let's use mmu_gather fields to provide a more fine-grained flush of the TLB.

Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Samuel Holland <samuel.holland@sifive.com>
Tested-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com> # On RZ/Five SMARC
Link: https://lore.kernel.org/r/20231030133027.19542-2-alexghiti@rivosinc.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
- 05 Nov, 2023 17 commits
-
-
Jisheng Zhang authored
Update the T-Head memory type definitions according to the C910 doc [1]. For NC and IO, the SH property isn't configurable and is hardcoded as SH, so set SH for NOCACHE and IO. Also set bit[61] (Bufferable) for NOCACHE, according to table 6.1 in the doc [1].

Link: https://github.com/T-head-Semi/openc910 [1]
Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
Reviewed-by: Guo Ren <guoren@kernel.org>
Tested-by: Drew Fustini <dfustini@baylibre.com>
Link: https://lore.kernel.org/r/20230912072510.2510-1-jszhang@kernel.org
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
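A sketch of what such definitions look like, assuming the C910 manual's bit assignments for the custom PTE attribute bits (the macro names are illustrative, not the kernel's; verify the layout against the cited doc):

```c
/* T-Head C910 custom PTE attribute bits, per the cited manual
 * (assumed layout for illustration). */
#define THEAD_PTE_SO	(1UL << 63)	/* strongly ordered */
#define THEAD_PTE_C	(1UL << 62)	/* cacheable */
#define THEAD_PTE_B	(1UL << 61)	/* bufferable */
#define THEAD_PTE_SH	(1UL << 60)	/* shareable (hardwired for NC/IO) */

/* NC is shareable + bufferable; IO is shareable + strongly ordered. */
#define THEAD_PAGE_NOCACHE	(THEAD_PTE_SH | THEAD_PTE_B)
#define THEAD_PAGE_IO		(THEAD_PTE_SH | THEAD_PTE_SO)
```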
-
Palmer Dabbelt authored
Jisheng Zhang <jszhang@kernel.org> says:

This series renews one of my RFC patches from last year [1] and tries to improve the vDSO layout a bit.

Patch 1 removes useless symbols.
Patch 2 merges the .data section of the vDSO into .rodata, because they are both read-only.
Patch 3 is the real renewed patch; it removes the hardcoded 0x800 .text start addr. I rewrote the commit msg per Andrew's suggestions and moved .note, .eh_frame_hdr, and .eh_frame between .rodata and .text to keep the actual code well away from the non-instruction data.

* b4-shazam-merge:
  riscv: vdso.lds.S: remove hardcoded 0x800 .text start addr
  riscv: vdso.lds.S: merge .data section into .rodata section
  riscv: vdso.lds.S: drop __alt_start and __alt_end symbols

Link: https://lore.kernel.org/linux-riscv/20221123161805.1579-1-jszhang@kernel.org/ [1]
Link: https://lore.kernel.org/r/20230912072015.2424-1-jszhang@kernel.org
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
Jisheng Zhang authored
I believe the hardcoded 0x800 and the related comments come from the long-lived VDSO_TEXT_OFFSET in the x86 vdso code, but commit 5b930493 ("x86 vDSO: generate vdso-syms.lds") and commit f6b46ebf ("x86 vDSO: new layout") removed the comment and the hardcoding for x86. Similar to x86 and other architectures, riscv doesn't need the rigid layout using VDSO_TEXT_OFFSET since it "no longer matters to the kernel". So we can remove the hardcoding now; removing it yields a smaller vdso.so and aligns riscv with other architectures. Also, having enough separation between data and text is important for the I-cache, so, similar to x86, move .note, .eh_frame_hdr, and .eh_frame between .rodata and .text.

Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Tested-by: Emil Renner Berthing <emil.renner.berthing@canonical.com>
Link: https://lore.kernel.org/r/20230912072015.2424-4-jszhang@kernel.org
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
Jisheng Zhang authored
The .data section doesn't need to be separate from the .rodata section; they are both read-only.

Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
Tested-by: Emil Renner Berthing <emil.renner.berthing@canonical.com>
Link: https://lore.kernel.org/r/20230912072015.2424-3-jszhang@kernel.org
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
Jisheng Zhang authored
These two symbols are not used, remove them.

Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
Tested-by: Emil Renner Berthing <emil.renner.berthing@canonical.com>
Link: https://lore.kernel.org/r/20230912072015.2424-2-jszhang@kernel.org
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
Yunhui Cui authored
Add a userland instruction dump and rename dump_kernel_instr() to dump_instr(). An example:

[ 0.822439] Freeing unused kernel image (initmem) memory: 6916K
[ 0.823817] Run /init as init process
[ 0.839411] init[1]: unhandled signal 4 code 0x1 at 0x000000000005be18 in bb[10000+5fb000]
[ 0.840751] CPU: 0 PID: 1 Comm: init Not tainted 5.14.0-rc4-00049-gbd644290aa72-dirty #187
[ 0.841373] Hardware name: , BIOS
[ 0.841743] epc : 000000000005be18 ra : 0000000000079e74 sp : 0000003fffcafda0
[ 0.842271]  gp : ffffffff816e9dc8 tp : 0000000000000000 t0 : 0000000000000000
[ 0.842947]  t1 : 0000003fffc9fdf0 t2 : 0000000000000000 s0 : 0000000000000000
[ 0.843434]  s1 : 0000000000000000 a0 : 0000003fffca0190 a1 : 0000003fffcafe18
[ 0.843891]  a2 : 0000000000000000 a3 : 0000000000000000 a4 : 0000000000000000
[ 0.844357]  a5 : 0000000000000000 a6 : 0000000000000000 a7 : 0000000000000000
[ 0.844803]  s2 : 0000000000000000 s3 : 0000000000000000 s4 : 0000000000000000
[ 0.845253]  s5 : 0000000000000000 s6 : 0000000000000000 s7 : 0000000000000000
[ 0.845722]  s8 : 0000000000000000 s9 : 0000000000000000 s10: 0000000000000000
[ 0.846180]  s11: 0000000000d144e0 t3 : 0000000000000000 t4 : 0000000000000000
[ 0.846616]  t5 : 0000000000000000 t6 : 0000000000000000
[ 0.847204] status: 0000000200000020 badaddr: 00000000f0028053 cause: 0000000000000002
[ 0.848219] Code: f06f ff5f 3823 fa11 0113 fb01 2e23 0201 0293 0000 (8053) f002
[ 0.851016] Kernel panic - not syncing: Attempted to kill init! exitcode=0x00000004

Signed-off-by: Yunhui Cui <cuiyunhui@bytedance.com>
Reviewed-by: Björn Töpel <bjorn@rivosinc.com>
Link: https://lore.kernel.org/r/20230912021349.28302-1-cuiyunhui@bytedance.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
Nam Cao authored
Instructions can write to x0, so we should simulate these instructions normally. Currently, the kernel hangs if an instruction that writes to x0 is simulated.

Fixes: c22b0bcb ("riscv: Add kprobes supported")
Cc: stable@vger.kernel.org
Signed-off-by: Nam Cao <namcaov@gmail.com>
Reviewed-by: Charlie Jenkins <charlie@rivosinc.com>
Acked-by: Guo Ren <guoren@kernel.org>
Link: https://lore.kernel.org/r/20230829182500.61875-1-namcaov@gmail.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
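An illustrative sketch of the rule (not the kernel's API): a simulated write to x0 must be treated as a success and discarded, since x0 is hardwired to zero and has no storage slot.

```c
#include <stdint.h>
#include <stdbool.h>

/* regs holds x1..x31 at indices 0..30; x0 has no slot. */
static bool set_gpr(unsigned long *regs, uint32_t rd, unsigned long val)
{
	if (rd == 0)
		return true;	/* x0 ignores writes; still a success */
	if (rd < 32) {
		regs[rd - 1] = val;
		return true;
	}
	return false;		/* no such register */
}
```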
-
Nam Cao authored
uprobes expects is_trap_insn() to return true for any trap instruction, not just the one used for installing uprobes. The current default implementation only returns true for the 16-bit c.ebreak if the C extension is enabled. This can confuse uprobes if a 32-bit ebreak generates a trap exception from userspace: uprobes asks is_trap_insn(), which says there is no trap, so uprobes assumes a probe was there before but has been removed, and returns to the trap instruction. This causes an infinite loop of entering and exiting the trap handler. Instead of using the default implementation, implement this function specifically for riscv with checks for both ebreak and c.ebreak.

Fixes: 74784081 ("riscv: Add uprobes supported")
Signed-off-by: Nam Cao <namcaov@gmail.com>
Tested-by: Björn Töpel <bjorn@rivosinc.com>
Reviewed-by: Guo Ren <guoren@kernel.org>
Link: https://lore.kernel.org/r/20230829083614.117748-1-namcaov@gmail.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
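A sketch of such a check, assuming the standard encodings (ebreak is 0x00100073, c.ebreak is 0x9002); the function and macro names are illustrative:

```c
#include <stdbool.h>
#include <stdint.h>

#define EBREAK_INSN	0x00100073U	/* 32-bit ebreak */
#define C_EBREAK_INSN	0x9002U		/* 16-bit c.ebreak */

/* Report a trap for both breakpoint encodings. A compressed instruction
 * has its low two bits != 0b11, so only its low 16 bits are compared. */
static bool riscv_is_trap_insn(uint32_t insn)
{
	if ((insn & 0x3) != 0x3)
		return (insn & 0xffff) == C_EBREAK_INSN;
	return insn == EBREAK_INSN;
}
```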
-
Palmer Dabbelt authored
Yu Chien Peter Lin <peterlin@andestech.com> says:

This patchset enhances PTDUMP by providing additional information from pagetable entries. The first patch fixes the RSW field, while the second and third patches introduce the PBMT and NAPOT fields, respectively, for RV64 systems.

* b4-shazam-merge:
  riscv: Introduce NAPOT field to PTDUMP
  riscv: Introduce PBMT field to PTDUMP
  riscv: Improve PTDUMP to show RSW with non-zero value

Link: https://lore.kernel.org/r/20230921025022.3989723-1-peterlin@andestech.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
Yu Chien Peter Lin authored
This patch introduces the NAPOT field to PTDUMP, allowing it to display the letter "N" for pages that have the 63rd bit set.

Signed-off-by: Yu Chien Peter Lin <peterlin@andestech.com>
Reviewed-by: Alexandre Ghiti <alexghiti@rivosinc.com>
Tested-by: Alexandre Ghiti <alexghiti@rivosinc.com>
Link: https://lore.kernel.org/r/20230921025022.3989723-4-peterlin@andestech.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
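A sketch of the kind of table entry this adds, following the generic ptdump prot_bits pattern (the struct layout and names here are assumptions, not the exact kernel code):

```c
struct prot_bits {
	unsigned long mask;
	const char *set;	/* printed when the bit is set */
	const char *clear;	/* printed otherwise */
};

#define _PAGE_NAPOT	(1UL << 63)	/* Svnapot: PTE bit 63 */

static const struct prot_bits napot_entry = {
	.mask	= _PAGE_NAPOT,
	.set	= "N",
	.clear	= ".",
};
```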
-
Yu Chien Peter Lin authored
This patch introduces the PBMT field to PTDUMP, so it can display the NC or IO memory attributes.

Signed-off-by: Yu Chien Peter Lin <peterlin@andestech.com>
Reviewed-by: Alexandre Ghiti <alexghiti@rivosinc.com>
Tested-by: Alexandre Ghiti <alexghiti@rivosinc.com>
Link: https://lore.kernel.org/r/20230921025022.3989723-3-peterlin@andestech.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
Yu Chien Peter Lin authored
The RSW field can be used to encode 2 bits of software-defined information. Currently, PTDUMP only prints "RSW" when its value is 1 or 3. To fix this issue and improve the debugging experience with PTDUMP, we redefine _PAGE_SPECIAL to its original value and use _PAGE_SOFT as the RSW mask, allowing PTDUMP to print RSW with any non-zero value. This patch also removes the val field from struct prot_bits, as it is no longer needed.

Signed-off-by: Yu Chien Peter Lin <peterlin@andestech.com>
Reviewed-by: Alexandre Ghiti <alexghiti@rivosinc.com>
Tested-by: Alexandre Ghiti <alexghiti@rivosinc.com>
Link: https://lore.kernel.org/r/20230921025022.3989723-2-peterlin@andestech.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
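For orientation, a small runnable sketch of extracting and printing RSW, assuming it occupies PTE bits [9:8] as in the Sv39/Sv48 page-table formats (the mask name follows the commit text; the helper is illustrative):

```c
#include <stdio.h>

#define _PAGE_SOFT	(3UL << 8)	/* RSW: software bits [9:8] */

/* Print RSW for any non-zero value, not just 1 or 3. */
static void print_rsw(unsigned long pte)
{
	unsigned long rsw = (pte & _PAGE_SOFT) >> 8;

	if (rsw)
		printf("RSW:%lu ", rsw);
}

int main(void)
{
	print_rsw(2UL << 8);	/* prints "RSW:2", previously not shown */
	return 0;
}
```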
-
Conor Dooley authored
The CMO op macros initially used lower case, as the original iteration of the ALT_CMO_OP alternative stringified the first parameter to finalise the assembly for the standard variant. As a knock-on, the T-Head versions of these CMOs had to use mixed-case defines. Commit dd23e953 ("RISC-V: replace cbom instructions with an insn-def") removed the asm construction with stringify, replacing it with an insn-def macro, rendering the lower case surplus to requirements. As far as I can tell from a brief check, CBO_zero does not see similar use and didn't require the mixed-case define in the first place. Replace the lower-case characters now for consistency with other insn-def macros in the standard and T-Head forms, and adjust the callsites.

Suggested-by: Andrew Jones <ajones@ventanamicro.com>
Signed-off-by: Conor Dooley <conor.dooley@microchip.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Link: https://lore.kernel.org/r/20230915-aloe-dollar-994937477776@spud
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
Jisheng Zhang authored
If the misaligned_access_speed percpu variable isn't the so-called "HWPROBE MISALIGNED UNKNOWN", the probe has already happened (this is possible, for example, after hotplugging a CPU off and then back on) and the percpu variable has already been set, so don't probe again in this case.

Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
Fixes: 584ea656 ("RISC-V: Probe for unaligned access speed")
Reviewed-by: Conor Dooley <conor.dooley@microchip.com>
Link: https://lore.kernel.org/r/20230912154040.3306-1-jszhang@kernel.org
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
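A mock of that guard, assuming the hwprobe UAPI's RISCV_HWPROBE_MISALIGNED_UNKNOWN value of 0 and a plain array standing in for the per-cpu variable:

```c
#include <stdbool.h>

#define RISCV_HWPROBE_MISALIGNED_UNKNOWN	0	/* assumed UAPI value */
#define NR_CPUS					8

static int misaligned_access_speed[NR_CPUS];

/* Skip re-probing a CPU whose speed was already measured before it was
 * hotplugged off and back on. */
static bool should_probe_misaligned_speed(int cpu)
{
	return misaligned_access_speed[cpu] == RISCV_HWPROBE_MISALIGNED_UNKNOWN;
}
```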
-
Jinyu Tang authored
If these configs are not set, MMC can't run on the JH7110 and the rootfs can't be found when using an SD card. So set CONFIG_MMC_DW=y, like the arm64 defconfig, and set CONFIG_MMC_DW_STARFIVE=y for StarFive. Then the StarFive VF2 board can boot an SD-card rootfs with the mainline defconfig and dtb.

Signed-off-by: Jinyu Tang <tangjinyu@tinylab.org>
Reviewed-by: Conor Dooley <conor.dooley@microchip.com>
Link: https://lore.kernel.org/r/20230912133128.5247-1-tangjinyu@tinylab.org
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
Haorong Lu authored
In the current riscv implementation, blocking syscalls like read() may not correctly restart after being interrupted by ptrace. This problem arises when the syscall restart process in arch_do_signal_or_restart() is bypassed due to changes to the regs->cause register, such as an ebreak instruction.

Steps to reproduce:
1. Interrupt the tracee process with PTRACE_SEIZE & PTRACE_INTERRUPT.
2. Back up the original registers and the instruction at new_pc.
3. Change pc to new_pc, and inject an instruction (like ebreak) at this address.
4. Resume with PTRACE_CONT and wait for the process to stop again after executing ebreak.
5. Restore the original registers and instructions, and detach from the tracee process.
6. Now the read() syscall in the tracee will return -1 with errno set to ERESTARTSYS.

Specifically, during an interrupt, regs->cause changes from EXC_SYSCALL to EXC_BREAKPOINT due to the injected ebreak, and since the cause is inaccessible via ptrace, we cannot restore it. This alteration breaks the syscall restart condition and ends the read() syscall with an ERESTARTSYS error which, according to include/linux/errno.h, should never be seen by user programs. X86 avoids this issue as it checks the syscall condition using a register (orig_ax) exposed to user space. Arm64 handles syscall restart before calling get_signal, where it could be paused and inspected by ptrace/debugger.

This patch adjusts the riscv implementation to the arm64 style, which also checks the syscall using a kernel register (syscallno). It ensures the syscall restart process is not bypassed when changes to the cause register occur, providing more consistent behavior across architectures.

For a simplified reproduction program, feel free to visit: https://github.com/ancientmodern/riscv-ptrace-bug-demo.

Signed-off-by: Haorong Lu <ancientmodern4@gmail.com>
Link: https://lore.kernel.org/r/20230803224458.4156006-1-ancientmodern4@gmail.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
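A schematic of the arm64-style check described above, using mock types (the field names are illustrative; riscv's actual pt_regs layout differs):

```c
#include <stdbool.h>

struct pt_regs_mock {
	unsigned long cause;	/* can be clobbered by an injected ebreak */
	long syscallno;		/* kernel-saved; -1 means "not in a syscall" */
};

/* Restart decisions key off the kernel-saved syscall number rather than
 * the trap cause, so a tracer rewriting the cause cannot bypass them. */
static bool in_syscall(const struct pt_regs_mock *regs)
{
	return regs->syscallno >= 0;
}
```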
-
Palmer Dabbelt authored
Clément Léger <cleger@rivosinc.com> says:

Since commit 61cadb9 ("Provide new description of misaligned load/store behavior compatible with privileged architecture.") in the RISC-V ISA manual, it is stated that misaligned load/store might not be supported. However, the RISC-V kernel uABI describes that misaligned accesses are supported. In order to support that, this series adds support for S-mode handling of misaligned accesses as well as support for prctl(PR_UNALIGN).

Handling misaligned accesses in the kernel allows for finer-grained control of the misaligned access behavior, and thanks to the prctl() call, can allow disabling misaligned access emulation to generate SIGBUS. User space can then optimize its software by removing such accesses based on SIGBUS generation.

This series is useful when using an SBI implementation that does not handle misaligned traps, as well as for detecting misaligned accesses generated by userspace applications using the prctl(PR_SET_UNALIGN) feature.

This series can be tested using the spike simulator [1] and a modified openSBI version [2] which allows always delegating misaligned load/store to S-mode. A test [3] that exercises various instructions/registers can be executed to verify the unaligned access support.

[1] https://github.com/riscv-software-src/riscv-isa-sim
[2] https://github.com/rivosinc/opensbi/tree/dev/cleger/no_misaligned
[3] https://github.com/clementleger/unaligned_test

* b4-shazam-merge:
  riscv: add support for PR_SET_UNALIGN and PR_GET_UNALIGN
  riscv: report misaligned accesses emulation to hwprobe
  riscv: annotate check_unaligned_access_boot_cpu() with __init
  riscv: add support for sysctl unaligned_enabled control
  riscv: add floating point insn support to misaligned access emulation
  riscv: report perf event for misaligned fault
  riscv: add support for misaligned trap handling in S-mode
  riscv: remove unused functions in traps_misaligned.c

Link: https://lore.kernel.org/r/20231004151405.521596-1-cleger@rivosinc.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
- 02 Nov, 2023 1 commit
-
-
Andrew Jones authored
A hwprobe pair key is signed, but the hwprobe vDSO function was only checking that the upper bound was valid. In order to help avoid this type of problem in the future, and in anticipation of this check becoming more complicated with sparse keys, introduce and use a "key is valid" predicate function for the check.

Fixes: aa5af0aa ("RISC-V: Add hwprobe vDSO function and data")
Signed-off-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Evan Green <evan@rivosinc.com>
Link: https://lore.kernel.org/r/20231010165101.14942-2-ajones@ventanamicro.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
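A sketch of such a predicate, checking both bounds of the signed key (RISCV_HWPROBE_MAX_KEY is a real UAPI constant, but the value stubbed here is an assumption for self-containment):

```c
#include <stdbool.h>

#ifndef RISCV_HWPROBE_MAX_KEY
#define RISCV_HWPROBE_MAX_KEY	5	/* stand-in; take the real value from <asm/hwprobe.h> */
#endif

/* Keys are signed, so a negative key must be rejected along with
 * anything above the maximum. */
static bool hwprobe_key_is_valid(long long key)
{
	return key >= 0 && key <= RISCV_HWPROBE_MAX_KEY;
}
```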
-