- 29 Apr, 2019 19 commits
-
-
Sebastian Ott authored
Up until now all interrupts on s390 have been floating. For MSI interrupts we've used a global summary bit vector (with a bit for each function) and a per-function interrupt bit vector (with a bit per MSI). This patch introduces a new IRQ delivery mode: CPU directed interrupts. In this mode a per-CPU interrupt bit vector is used (with a bit per MSI per function). Furthermore, it is now possible to direct an IRQ to a specific CPU, so we can finally support IRQ affinity. If an interrupt can't be delivered because the appointed CPU is occupied by the hypervisor, the interrupt is delivered floating; for this a global summary bit vector is used (with a bit per CPU). Signed-off-by: Sebastian Ott <sebott@linux.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
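A minimal sketch of the data layout described above (the names and sizes are illustrative, not the actual s390 implementation):

#include <stddef.h>

#define EXAMPLE_MSI_PER_CPU 2048    /* illustrative sizing, not a real limit */

/* One interrupt bit vector per CPU: a bit per MSI per function. */
struct example_cpu_iv {
    unsigned long bits[EXAMPLE_MSI_PER_CPU / (8 * sizeof(unsigned long))];
};

/*
 * Directed delivery sets a bit in the vector of the CPU the IRQ is affine
 * to.  If that CPU is currently not backed by the hypervisor, delivery
 * falls back to floating mode and a bit in a global summary vector (one
 * bit per CPU) is set instead, so the interrupt can be picked up later.
 */
struct example_directed_irq_state {
    struct example_cpu_iv *per_cpu_iv;  /* one entry per possible CPU */
    unsigned long *cpu_summary;         /* a bit per CPU for the floating fallback */
};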
-
Sebastian Ott authored
Provide the ability to create cachesize aligned interrupt vectors. These will be used for per-CPU interrupt vectors. Signed-off-by: Sebastian Ott <sebott@linux.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Sebastian Ott authored
Rename and clarify the usage of the interrupt bit vectors. Also change the array of the per-function bit vectors to be dynamically allocated. Signed-off-by: Sebastian Ott <sebott@linux.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Sebastian Ott authored
Add an extra parameter for airq handlers to recognize floating vs. directed interrupts. Signed-off-by: Sebastian Ott <sebott@linux.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
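A sketch of what the changed handler signature could look like (the parameter name and the airq type here are assumptions for illustration, not copied from the tree):

#include <stdbool.h>

struct airq_struct;    /* opaque stand-in for the s390 adapter-interrupt descriptor */

/*
 * Handlers receive a flag telling them whether the interrupt was delivered
 * floating (any CPU) or directed (to this CPU), so they can scan the
 * matching interrupt bit vector.
 */
typedef void (*example_airq_handler_t)(struct airq_struct *airq, bool floating);

static void example_handler(struct airq_struct *airq, bool floating)
{
    if (floating) {
        /* scan the shared per-function bit vectors */
    } else {
        /* scan this CPU's directed bit vector */
    }
}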
-
Sebastian Ott authored
Detect the adapter CPU directed interruption facility. Signed-off-by: Sebastian Ott <sebott@linux.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Sebastian Ott authored
Move everything interrupt related from pci.c to pci_irq.c. No functional change. Signed-off-by: Sebastian Ott <sebott@linux.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Sebastian Ott authored
No users of pr_debug in that file. Signed-off-by: Sebastian Ott <sebott@linux.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Sebastian Ott authored
No point to keep that around. Signed-off-by: Sebastian Ott <sebott@linux.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Philipp Rudo authored
Provide an interface for userspace so it can find out whether a machine is capable of doing secure boot. The interface is, for example, needed by zipl so it can find out which file format it can/should write to disk. Signed-off-by: Philipp Rudo <prudo@linux.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Philipp Rudo authored
A kernel loaded via kexec_load cannot be verified. Thus disable the kexec_load system call in kernels that were IPLed securely. Use the IMA mechanism to do so. Signed-off-by: Philipp Rudo <prudo@linux.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Philipp Rudo authored
Signed-off-by: Philipp Rudo <prudo@linux.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Philipp Rudo authored
Add kernel signature verification to kexec_file. The verification is based on module signature verification and works with kernel images signed via scripts/sign-file. Signed-off-by: Philipp Rudo <prudo@linux.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Philipp Rudo authored
The leading 64 kB of a kernel image don't contain any data needed to boot the new kernel when it is loaded via kexec_file. Thus kexec_file currently strips them off before loading the image. Keep the leading 64 kB in order to be able to pass an ipl_report to the next kernel. Signed-off-by: Philipp Rudo <prudo@linux.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Philipp Rudo authored
s390_image_load and s390_elf_load have the same code to load the different components. Combine this functionality in one shared function. While at it move kexec_file_update_kernel into the new function as well. Signed-off-by: Philipp Rudo <prudo@linux.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Philipp Rudo authored
Access the parmarea in head.S via a struct instead of individual offsets. While at it make the fields in the parmarea .quads. Signed-off-by: Philipp Rudo <prudo@linux.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Philipp Rudo authored
Omit use of the scripts/bin2c hack. Directly include the data into the assembler file instead. Signed-off-by: Philipp Rudo <prudo@linux.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Philipp Rudo authored
The purgatory is compiled into the vmlinux and kept in memory all the time during runtime. Thus any section not needed to load the purgatory unnecessarily bloats its footprint in both file size and memory size. Reduce the purgatory size by stripping the unneeded sections from it. This reduces the purgatory's size by ~33%. Signed-off-by: Philipp Rudo <prudo@linux.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Philipp Rudo authored
To register data for the next kernel (command line, oldmem_base, etc.) the current kernel needs to find the ELF segment that contains head.S. This is currently done by checking for 'phdr->p_paddr == 0'. This works fine for the current kernel build, but in theory the first few pages could be skipped. Make the detection more robust by checking whether the entry point lies within the segment. Signed-off-by: Philipp Rudo <prudo@linux.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
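A minimal sketch of the more robust check described above (standard ELF types; the surrounding kexec_file plumbing is omitted):

#include <elf.h>
#include <stdbool.h>

/*
 * Instead of assuming the segment containing head.S starts at physical
 * address 0, check whether the ELF entry point falls inside the segment.
 */
static bool segment_contains_entry(const Elf64_Phdr *phdr, Elf64_Addr entry)
{
    return phdr->p_type == PT_LOAD &&
           entry >= phdr->p_paddr &&
           entry < phdr->p_paddr + phdr->p_memsz;
}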
-
Philipp Rudo authored
When loading an ELF image via kexec_file the segment alignment is ignored in the calculation for the load address of the next segment. When there are multiple segments this can lead to segment overlap and thus load failure. Signed-off-by: Philipp Rudo <prudo@linux.ibm.com> Fixes: 8be01882 ("s390/kexec_file: Add ELF loader") Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
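A sketch of the kind of alignment handling described here, assuming a running load-address cursor (the helper is illustrative, not the kernel code; p_align is zero or a power of two per the ELF spec):

#include <elf.h>

/* Round the candidate load address up to the segment's alignment. */
static Elf64_Addr next_segment_addr(Elf64_Addr load_addr, const Elf64_Phdr *phdr)
{
    Elf64_Addr align = phdr->p_align ? phdr->p_align : 1;

    return (load_addr + align - 1) & ~(align - 1);
}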
-
- 26 Apr, 2019 7 commits
-
-
Philipp Rudo authored
With git commit 1e941d39 "s390: move ipl block to .boot.preserved.data section" the early_ipl_block got renamed to ipl_block and became publicly available via boot_data.h. This might cause problems with zcore, which has its own ipl_block variable. Thus rename the ipl_block in zcore to prevent a name collision and highlight that it's only used locally. Signed-off-by: Philipp Rudo <prudo@linux.ibm.com> Fixes: 1e941d39 ("s390: move ipl block to .boot.preserved.data section") Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Martin Schwidefsky authored
In order to be able to sign the bzImage independent of the block size of the IPL device, align the bzImage to 4096 bytes. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Martin Schwidefsky authored
PR: Adjusted for the later use in kexec_file. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Philipp Rudo <prudo@linux.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Martin Schwidefsky authored
Read the IPL Report block provided by secure boot, add the entries of the certificate list to the system keyring and print the list of components. PR: Adjust to Vasily's bootdata_preserved patch set. Preserve ipl_cert_list for later use in kexec_file. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Philipp Rudo <prudo@linux.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Martin Schwidefsky authored
To transport the information required for secure boot, a new IPL report will be created at boot time. It will be written to memory right after the IPL parameter block. To work with the IPL report, a couple of additional structure definitions are added to the uapi/ipl.h header. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
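An illustrative sketch of how such a report could be laid out in memory (the struct and field names below are invented for illustration; the real definitions are the ones this commit adds to the uapi header):

#include <stdint.h>

/* Hypothetical report header; the report sits right after the IPL parameter block. */
struct example_ipl_report_hdr {
    uint32_t len;        /* total length of the report */
    uint32_t version;    /* layout version */
} __attribute__((packed));

/* Hypothetical variable-length entry, e.g. a component list or certificate list. */
struct example_ipl_report_entry {
    uint32_t type;       /* what kind of list follows */
    uint32_t len;        /* length of this entry including data */
    uint8_t  data[];     /* entry payload */
} __attribute__((packed));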
-
Martin Schwidefsky authored
The IPL parameter block is used as an interface between Linux and the machine to query and change the boot device and boot options. To be able to create an IPL parameter block in user space and pass it as a segment to kexec, provide a uapi header with proper structure definitions for the block. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Martin Schwidefsky authored
The ipl_info union in struct ipl_parameter_block has the same name as the struct ipl_info. This does not help while reading the code and the union in struct ipl_parameter_block does not need to be named. Drop the name from the union. Reviewed-by: Hendrik Brueckner <brueckner@linux.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
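A generic before/after sketch of dropping a union's name (simplified members; not the actual s390 IPL structures):

#include <stdint.h>

/* Before: members must be reached through the union's name. */
struct example_block_named {
    uint32_t hdr;
    union {
        uint64_t fcp_wwpn;
        uint64_t ccw_devno;
    } ipl_info;                 /* e.g. block.ipl_info.ccw_devno */
};

/* After: the anonymous union exposes its members directly. */
struct example_block_anon {
    uint32_t hdr;
    union {
        uint64_t fcp_wwpn;
        uint64_t ccw_devno;
    };                          /* e.g. block.ccw_devno */
};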
-
- 25 Apr, 2019 4 commits
-
-
Martin Schwidefsky authored
Add hardware capability bits and feature tags to /proc/cpuinfo for 4 new CPU features:
"Vector-Enhancements Facility 2" (tag "vxe2", hwcap 2^15)
"Vector-Packed-Decimal-Enhancement Facility" (tag "vxp", hwcap 2^16)
"Enhanced-Sort Facility" (tag "sort", hwcap 2^17)
"Deflate-Conversion Facility" (tag "dflt", hwcap 2^18)
Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
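A user-space sketch of testing these bits via the auxiliary vector (the bit positions follow the hwcap values listed above; the macro names are placeholders, not the kernel's HWCAP_S390_* constants):

#include <stdio.h>
#include <sys/auxv.h>

/* Placeholder names; the kernel defines its own HWCAP_S390_* constants. */
#define EXAMPLE_HWCAP_VXE2 (1UL << 15)  /* Vector-Enhancements Facility 2 */
#define EXAMPLE_HWCAP_VXP  (1UL << 16)  /* Vector-Packed-Decimal-Enhancement Facility */
#define EXAMPLE_HWCAP_SORT (1UL << 17)  /* Enhanced-Sort Facility */
#define EXAMPLE_HWCAP_DFLT (1UL << 18)  /* Deflate-Conversion Facility */

int main(void)
{
    unsigned long hwcap = getauxval(AT_HWCAP);

    printf("vxe2: %s\n", (hwcap & EXAMPLE_HWCAP_VXE2) ? "yes" : "no");
    printf("vxp:  %s\n", (hwcap & EXAMPLE_HWCAP_VXP)  ? "yes" : "no");
    printf("sort: %s\n", (hwcap & EXAMPLE_HWCAP_SORT) ? "yes" : "no");
    printf("dflt: %s\n", (hwcap & EXAMPLE_HWCAP_DFLT) ? "yes" : "no");
    return 0;
}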
-
Harald Freudenberger authored
With the z14 machine there came also a CPACF hardware extension which provides a True Random Number Generator. This TRNG can be accessed with a new subfunction code within the CPACF prno instruction and provides random data with very high entropy. So if there is a TRNG available, let's use it for initial seeding and reseeding instead of the current implementation, which tries to generate entropy based on stckf (store clock fast) jitter. For details about the amount of data needed and pulled for seeding and reseeding, see the explanatory comments in the code. Signed-off-by: Harald Freudenberger <freude@linux.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Harald Freudenberger authored
Here is a rework of the generate_entropy function of the pseudo-random device driver exploiting the prno CPACF instruction. George Spelvin pointed out some issues with the existing implementation. One point was that the buffer used to store the stckf values is 2 pages, which are initially filled with get_random_bytes() for each 64-byte chunk produced by the function. Another point was that the stckf values only carry entropy in the LSBs and thus a buffer of 2 pages is not really needed. There was also a comment about the use of the kimd CPACF function without proper initialization. The rework addresses these points: now one page is used and only one half of it is filled with get_random_bytes() for each chunk of 64 bytes of requested data. The other half of the page is filled with stckf values XORed in with an overlap of 4 bytes. This can be done because only the lower 4 bytes carry the entropy we need. For more details about the algorithm used, see the header of the function. The generate_entropy() function now uses the CPACF function klmd with proper initialization of the parameter block to perform the sha512 hash. George also pointed out some issues with the internal buffers used for seeding and reads. These buffers are now zeroed with memzero_explicit() after use. Signed-off-by: Harald Freudenberger <freude@linux.ibm.com> Reported-by: George Spelvin <lkml@sdf.org> Suggested-by: George Spelvin <lkml@sdf.org> Reviewed-by: Patrick Steuer <steuer@linux.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
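A rough sketch of the overlapping fill described above, assuming a helper that returns the 8-byte store-clock-fast value (stckf_value() and the buffer handling are illustrative, not the driver code):

#include <stdint.h>
#include <string.h>

/* Illustrative stand-in for the s390 stckf (store-clock-fast) instruction. */
extern uint64_t stckf_value(void);

/*
 * XOR 8-byte clock values into the buffer at 4-byte steps, so consecutive
 * values overlap by 4 bytes.  Only the low 4 bytes of each value change
 * quickly enough to carry entropy, and the overlap concentrates them.
 */
static void fill_with_overlapping_stckf(uint8_t *buf, size_t len)
{
    size_t off;

    for (off = 0; off + 8 <= len; off += 4) {
        uint64_t clk = stckf_value();
        uint64_t cur;

        memcpy(&cur, buf + off, 8);   /* read the 8 bytes at this offset */
        cur ^= clk;                   /* mix in the clock value */
        memcpy(buf + off, &cur, 8);   /* write back; next step overlaps by 4 bytes */
    }
}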
-
Martin Schwidefsky authored
Merge tag 'vfio-ccw-20190425' of https://git.kernel.org/pub/scm/linux/kernel/git/kvms390/vfio-ccw into features
Pull vfio-ccw from Cornelia Huck with the following changes:
- support for sending halt/clear requests to the device
- various bug fixes
-
- 24 Apr, 2019 10 commits
-
-
Farhan Ali authored
The quiesce function calls cio_cancel_halt_clear() and if we get an -EBUSY we go into a loop where we:
- wait for any interrupts
- flush all I/O in the workqueue
- retry cio_cancel_halt_clear
During the period where we are waiting for interrupts or flushing all I/O, the channel subsystem could have completed a halt/clear action and turned off the corresponding activity control bits in the subchannel status word. This means the next time we call cio_cancel_halt_clear(), we will again start by calling cancel subchannel and so we can be stuck between calling cancel and halt forever. Rather than calling cio_cancel_halt_clear() immediately after waiting, let's try to disable the subchannel. If we succeed in disabling the subchannel then we know nothing else can happen with the device. Suggested-by: Eric Farman <farman@linux.ibm.com> Signed-off-by: Farhan Ali <alifm@linux.ibm.com> Message-Id: <4d5a4b98ab1b41ac6131b5c36de18b76c5d66898.1555449329.git.alifm@linux.ibm.com> Reviewed-by: Eric Farman <farman@linux.ibm.com> Acked-by: Halil Pasic <pasic@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
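A sketch of the loop shape this describes (the cio function prototypes below are assumptions for illustration; error handling and the actual wait/flush steps are elided):

#include <linux/errno.h>

struct subchannel;    /* opaque here */

/* Assumed prototypes, for illustration only. */
extern int cio_cancel_halt_clear(struct subchannel *sch, int *iretry);
extern int cio_disable_subchannel(struct subchannel *sch);

/*
 * Instead of retrying cancel/halt/clear forever, try to disable the
 * subchannel after waiting; once it is disabled nothing else can happen
 * with the device, so the loop is guaranteed to terminate.
 */
static int example_quiesce(struct subchannel *sch)
{
    int iretry = 255;
    int ret;

    do {
        ret = cio_cancel_halt_clear(sch, &iretry);
        if (ret != -EBUSY)
            break;
        /* ... wait for interrupts and flush pending I/O here ... */
        if (cio_disable_subchannel(sch) == 0)
            return 0;
    } while (iretry > 0);

    return ret;
}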
-
Farhan Ali authored
When releasing the vfio-ccw mdev, we currently do not release any existing channel program and its pinned pages. This can lead to the following warning:
[1038876.561565] WARNING: CPU: 2 PID: 144727 at drivers/vfio/vfio_iommu_type1.c:1494 vfio_sanity_check_pfn_list+0x40/0x70 [vfio_iommu_type1]
....
[1038876.561921] Call Trace:
[1038876.561935] ([<00000009897fb870>] 0x9897fb870)
[1038876.561949] [<000003ff8013bf62>] vfio_iommu_type1_detach_group+0xda/0x2f0 [vfio_iommu_type1]
[1038876.561965] [<000003ff8007b634>] __vfio_group_unset_container+0x64/0x190 [vfio]
[1038876.561978] [<000003ff8007b87e>] vfio_group_put_external_user+0x26/0x38 [vfio]
[1038876.562024] [<000003ff806fc608>] kvm_vfio_group_put_external_user+0x40/0x60 [kvm]
[1038876.562045] [<000003ff806fcb9e>] kvm_vfio_destroy+0x5e/0xd0 [kvm]
[1038876.562065] [<000003ff806f63fc>] kvm_put_kvm+0x2a4/0x3d0 [kvm]
[1038876.562083] [<000003ff806f655e>] kvm_vm_release+0x36/0x48 [kvm]
[1038876.562098] [<00000000003c2dc4>] __fput+0x144/0x228
[1038876.562113] [<000000000016ee82>] task_work_run+0x8a/0xd8
[1038876.562125] [<000000000014c7a8>] do_exit+0x5d8/0xd90
[1038876.562140] [<000000000014d084>] do_group_exit+0xc4/0xc8
[1038876.562155] [<000000000015c046>] get_signal+0x9ae/0xa68
[1038876.562169] [<0000000000108d66>] do_signal+0x66/0x768
[1038876.562185] [<0000000000b9e37e>] system_call+0x1ea/0x2d8
[1038876.562195] 2 locks held by qemu-system-s39/144727:
[1038876.562205] #0: 00000000537abaf9 (&container->group_lock){++++}, at: __vfio_group_unset_container+0x3c/0x190 [vfio]
[1038876.562230] #1: 00000000670008b5 (&iommu->lock){+.+.}, at: vfio_iommu_type1_detach_group+0x36/0x2f0 [vfio_iommu_type1]
[1038876.562250] Last Breaking-Event-Address:
[1038876.562262] [<000003ff8013aa24>] vfio_sanity_check_pfn_list+0x3c/0x70 [vfio_iommu_type1]
[1038876.562272] irq event stamp: 4236481
[1038876.562287] hardirqs last enabled at (4236489): [<00000000001cee7a>] console_unlock+0x6d2/0x740
[1038876.562299] hardirqs last disabled at (4236496): [<00000000001ce87e>] console_unlock+0xd6/0x740
[1038876.562311] softirqs last enabled at (4234162): [<0000000000b9fa1e>] __do_softirq+0x556/0x598
[1038876.562325] softirqs last disabled at (4234153): [<000000000014e4cc>] irq_exit+0xac/0x108
[1038876.562337] ---[ end trace 6c96d467b1c3ca06 ]---
Similarly we do not free the channel program when we are removing the vfio-ccw device. Let's fix this by resetting the device and freeing the channel program and pinned pages in the release path. For the remove path we can just quiesce the device, since in the remove path the mediated device is going away for good and so we don't need to do a full reset. Signed-off-by: Farhan Ali <alifm@linux.ibm.com> Message-Id: <ae9f20dc8873f2027f7b3c5d2aaa0bdfe06850b8.1554756534.git.alifm@linux.ibm.com> Acked-by: Eric Farman <farman@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
-
Farhan Ali authored
Currently we call flush_workqueue while holding the subchannel spinlock. But the flush_workqueue function can go to sleep, so do not call the function while holding the spinlock. Fixes the following bug:
[ 285.203430] BUG: scheduling while atomic: bash/14193/0x00000002
[ 285.203434] INFO: lockdep is turned off.
....
[ 285.203485] Preemption disabled at:
[ 285.203488] [<000003ff80243e5c>] vfio_ccw_sch_quiesce+0xbc/0x120 [vfio_ccw]
[ 285.203496] CPU: 7 PID: 14193 Comm: bash Tainted: G W
....
[ 285.203504] Call Trace:
[ 285.203510] ([<0000000000113772>] show_stack+0x82/0xd0)
[ 285.203514] [<0000000000b7a102>] dump_stack+0x92/0xd0
[ 285.203518] [<000000000017b8be>] __schedule_bug+0xde/0xf8
[ 285.203524] [<0000000000b95b5a>] __schedule+0x7a/0xc38
[ 285.203528] [<0000000000b9678a>] schedule+0x72/0xb0
[ 285.203533] [<0000000000b9bfbc>] schedule_timeout+0x34/0x528
[ 285.203538] [<0000000000b97608>] wait_for_common+0x118/0x1b0
[ 285.203544] [<0000000000166d6a>] flush_workqueue+0x182/0x548
[ 285.203550] [<000003ff80243e6e>] vfio_ccw_sch_quiesce+0xce/0x120 [vfio_ccw]
[ 285.203556] [<000003ff80245278>] vfio_ccw_mdev_reset+0x38/0x70 [vfio_ccw]
[ 285.203562] [<000003ff802458b0>] vfio_ccw_mdev_remove+0x40/0x78 [vfio_ccw]
[ 285.203567] [<000003ff801a499c>] mdev_device_remove_ops+0x3c/0x80 [mdev]
[ 285.203573] [<000003ff801a4d5c>] mdev_device_remove+0xc4/0x130 [mdev]
[ 285.203578] [<000003ff801a5074>] remove_store+0x6c/0xa8 [mdev]
[ 285.203582] [<000000000046f494>] kernfs_fop_write+0x14c/0x1f8
[ 285.203588] [<00000000003c1530>] __vfs_write+0x38/0x1a8
[ 285.203593] [<00000000003c187c>] vfs_write+0xb4/0x198
[ 285.203597] [<00000000003c1af2>] ksys_write+0x5a/0xb0
[ 285.203601] [<0000000000b9e270>] system_call+0xdc/0x2d8
Signed-off-by: Farhan Ali <alifm@linux.ibm.com> Reviewed-by: Eric Farman <farman@linux.ibm.com> Reviewed-by: Pierre Morel <pmorel@linux.ibm.com> Message-Id: <626bab8bb2958ae132452e1ddaf1b20882ad5a9d.1554756534.git.alifm@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
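A generic sketch of the pattern being fixed (illustrative names, not the vfio-ccw code itself):

#include <linux/spinlock.h>
#include <linux/workqueue.h>

/*
 * flush_workqueue() can sleep, so it must not be called with a spinlock
 * held.  Drop the lock around the flush and re-take it afterwards.
 */
static void example_quiesce_flush(spinlock_t *lock, struct workqueue_struct *wq)
{
    spin_lock_irq(lock);
    /* ... state checks that need the lock ... */
    spin_unlock_irq(lock);

    flush_workqueue(wq);    /* may sleep: called without the lock held */

    spin_lock_irq(lock);
    /* ... continue with the lock held ... */
    spin_unlock_irq(lock);
}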
-
Cornelia Huck authored
Add a region to the vfio-ccw device that can be used to submit asynchronous I/O instructions. ssch continues to be handled by the existing I/O region; the new region handles hsch and csch. Interrupt status continues to be reported through the same channels as for ssch. Acked-by: Eric Farman <farman@linux.ibm.com> Reviewed-by: Farhan Ali <alifm@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
-
Cornelia Huck authored
The vfio-ccw code will need this, and it matches treatment of ssch and csch. Reviewed-by: Pierre Morel <pmorel@linux.ibm.com> Reviewed-by: Halil Pasic <pasic@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
-
Cornelia Huck authored
Allow extending the regions used by vfio-ccw. The first user will be the handling of halt and clear subchannel. Reviewed-by: Eric Farman <farman@linux.ibm.com> Reviewed-by: Farhan Ali <alifm@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
-
Cornelia Huck authored
Introduce a mutex to disallow concurrent reads or writes to the I/O region. This makes sure that the data the kernel or user space sees is always consistent. The same mutex will be used to protect the async region as well. Reviewed-by: Eric Farman <farman@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
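A minimal sketch of the serialization described here (the structure is illustrative, not the actual vfio-ccw private data):

#include <linux/errno.h>
#include <linux/mutex.h>
#include <linux/types.h>
#include <linux/uaccess.h>

/* Illustrative region state. */
struct example_io_region_state {
    struct mutex io_mutex;    /* serializes region reads and writes */
    char buf[256];
};

static ssize_t example_region_write(struct example_io_region_state *s,
                                    const char __user *ubuf, size_t count)
{
    ssize_t ret = count;

    if (count > sizeof(s->buf))
        return -EINVAL;

    mutex_lock(&s->io_mutex);    /* no concurrent reader or writer sees partial data */
    if (copy_from_user(s->buf, ubuf, count))
        ret = -EFAULT;
    mutex_unlock(&s->io_mutex);

    return ret;
}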
-
Cornelia Huck authored
The flow for processing ssch requests can be improved by splitting the BUSY state:
- CP_PROCESSING: We reject any user space requests while we are in the process of translating a channel program and submitting it to the hardware. Use -EAGAIN to signal user space that it should retry the request.
- CP_PENDING: We have successfully submitted a request with ssch and are now expecting an interrupt. As we can't handle more than one channel program being processed, reject any further requests with -EBUSY. A final interrupt will move us out of this state.
By making this a separate state, we make it possible to issue a halt or a clear while we're still waiting for the final interrupt for the ssch (in a follow-on patch). It also makes a lot of sense not to preemptively filter out writes to the io_region if we're in an incorrect state: the state machine will handle this correctly. Reviewed-by: Eric Farman <farman@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
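A condensed sketch of the request gating this describes (illustrative enum and function names, not the vfio-ccw state machine itself):

#include <errno.h>

/* Illustrative split of the former BUSY state. */
enum example_cp_state {
    EXAMPLE_IDLE,
    EXAMPLE_CP_PROCESSING,    /* translating/submitting a channel program */
    EXAMPLE_CP_PENDING,       /* ssch issued, waiting for the final interrupt */
};

static int example_check_request(enum example_cp_state state)
{
    switch (state) {
    case EXAMPLE_CP_PROCESSING:
        return -EAGAIN;    /* transient: user space should simply retry */
    case EXAMPLE_CP_PENDING:
        return -EBUSY;     /* a program is in flight; wait for its final interrupt */
    default:
        return 0;          /* fine to accept a new request */
    }
}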
-
Cornelia Huck authored
When we get a solicited interrupt, the start function may have been cleared by a csch, but we still have a channel program structure allocated. Make it safe to call the cp accessors in any case, so we can call them unconditionally. While at it, also make sure that functions called from other parts of the code return gracefully if the channel program structure has not been initialized (even though that is a bug in the caller). Reviewed-by: Eric Farman <farman@linux.ibm.com> Reviewed-by: Farhan Ali <alifm@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
-
Martin Schwidefsky authored
With git commit d1874a0c "s390/mm: make the pxd_offset functions more robust" and a 2-level page table it can now happen that pgd_bad() gets asked to verify a large segment table entry. If the entry is marked as dirty, pgd_bad() will incorrectly return true. Change the pgd_bad(), p4d_bad(), pud_bad() and pmd_bad() functions to first verify the table type, return false if the table level is lower than what the function is supposed to check, return true if the table level is too high, and otherwise check the relevant region and segment table bits. pmd_bad() has to check against ~SEGMENT_ENTRY_BITS for normal page table pointers or ~SEGMENT_ENTRY_BITS_LARGE for large segment table entries. Same for pud_bad(), which has to check against ~_REGION_ENTRY_BITS or ~_REGION_ENTRY_BITS_LARGE. Fixes: d1874a0c ("s390/mm: make the pxd_offset functions more robust") Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
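A sketch of the check order described above for one table level (the masks and type values are passed in as opaque parameters; the real definitions live in the s390 pgtable headers):

#include <stdbool.h>

/* Illustrative description of one table level; not the kernel's representation. */
struct example_level {
    unsigned long type_mask;          /* bits encoding the table type */
    unsigned long level_type;         /* type value expected at this level */
    unsigned long valid_bits;         /* legal bits for a normal table pointer */
    unsigned long valid_bits_large;   /* legal bits for a large (leaf) entry */
    unsigned long large_flag;         /* flag marking a large entry */
};

static bool example_pxd_bad(unsigned long entry, const struct example_level *lv)
{
    unsigned long type = entry & lv->type_mask;

    if (type < lv->level_type)      /* entry belongs to a lower level: fine */
        return false;
    if (type > lv->level_type)      /* entry claims a higher level: corrupt */
        return true;
    if (entry & lv->large_flag)     /* large entries allow extra bits (e.g. dirty) */
        return (entry & ~lv->valid_bits_large) != 0;
    return (entry & ~lv->valid_bits) != 0;
}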
-