- 06 May, 2019 1 commit
-
-
Jiri Kosina authored
-
- 03 May, 2019 2 commits
-
-
Petr Mladek authored
The kobject_init() call added one more operation that has to be done during the early initialization of both static and dynamic livepatch structures. It would have been easier if the early initialization code were not duplicated. Let's deduplicate it for future generations of livepatching hackers. The patch does not change the existing behavior. Signed-off-by: Petr Mladek <pmladek@suse.com> Reviewed-by: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com> Acked-by: Joe Lawrence <joe.lawrence@redhat.com> Signed-off-by: Jiri Kosina <jkosina@suse.cz>
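A minimal sketch of what such a shared early-init helper could look like; the helper name and the list field names are illustrative assumptions, not quotes of the patch:

    /* Hypothetical helper: the early initialization that can never fail,
     * shared by statically and dynamically allocated klp_func structures. */
    static void klp_init_func_early(struct klp_object *obj, struct klp_func *func)
    {
            kobject_init(&func->kobj, &klp_ktype_func);     /* always succeeds */
            list_add_tail(&func->node, &obj->func_list);    /* field names assumed */
    }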
-
Petr Mladek authored
kobject_init() always succeeds and sets the reference count to 1. This allows the structures to always be freed via kobject_put() and the related release callback. Note that the custom kobject state handling was used only because we did not know that kobject_put() can, and actually should, be called even when kobject_init_and_add() fails. The patch should not change the existing behavior. Suggested-by: "Tobin C. Harding" <tobin@kernel.org> Signed-off-by: Petr Mladek <pmladek@suse.com> Reviewed-by: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com> Acked-by: Joe Lawrence <joe.lawrence@redhat.com> Signed-off-by: Jiri Kosina <jkosina@suse.cz>
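For context, a minimal sketch of the kobject pattern the commit relies on (the structure, the "demo"/"parent"/"ret" names and the sysfs name are illustrative; only the kobject calls are the real API): once kobject_init() has run, cleanup always goes through kobject_put(), which invokes the ktype's release callback when the reference count drops to zero.

    struct klp_demo {
            struct kobject kobj;
            /* ... payload ... */
    };

    static void klp_demo_release(struct kobject *kobj)
    {
            kfree(container_of(kobj, struct klp_demo, kobj));
    }

    static struct kobj_type klp_demo_ktype = {
            .release   = klp_demo_release,
            .sysfs_ops = &kobj_sysfs_ops,
    };

    /* In the init path: kobject_init() cannot fail, so from this point on
     * the structure is always freed with kobject_put(), even if
     * kobject_add() fails afterwards. */
    kobject_init(&demo->kobj, &klp_demo_ktype);
    ret = kobject_add(&demo->kobj, parent, "demo");
    if (ret) {
            kobject_put(&demo->kobj);       /* release callback frees demo */
            return ret;
    }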
-
- 29 Apr, 2019 1 commit
-
-
Petr Mladek authored
The commit d0807da7 ("livepatch: Remove immediate feature") caused any livepatch to be refused when reliable stacktraces were not supported on the given architecture. This limitation is too strong. User space processes are safely migrated even when entering or leaving the kernel. Kthread transitions would need to be forced, but that is safe when:
+ The livepatch does not change the semantics of the code.
+ Callbacks do not depend on a safely finished transition.
Suggested-by: Josh Poimboeuf <jpoimboe@redhat.com> Acked-by: Josh Poimboeuf <jpoimboe@redhat.com> Acked-by: Miroslav Benes <mbenes@suse.cz> Reviewed-by: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com> Signed-off-by: Petr Mladek <pmladek@suse.com>
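A rough sketch of the relaxed check, assuming the change boils down to warning instead of refusing; the exact message and call site are assumptions:

    /* In klp_enable_patch(), roughly: do not reject the patch outright,
     * only warn that the consistency model cannot be fully guaranteed. */
    if (!klp_have_reliable_stack())
            pr_warn("This architecture does not have support for the livepatch consistency model.\n");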
-
- 15 Apr, 2019 1 commit
-
-
Miroslav Benes authored
Add functions.sh to TEST_PROGS_EXTENDED so that it is installed along with the rest of the selftests and they can be run. Originally-by: Shuah Khan <shuah@kernel.org> Signed-off-by: Miroslav Benes <mbenes@suse.cz> Acked-by: Joe Lawrence <joe.lawrence@redhat.com> Signed-off-by: Petr Mladek <pmladek@suse.com>
-
- 08 Apr, 2019 1 commit
-
-
Miroslav Benes authored
GCC 9 introduces a new option, -flive-patching. It disables certain optimizations that could make a compilation unsafe for later live patching of the running kernel. The option is used only if CONFIG_LIVEPATCH is enabled and $(CC) supports it. The performance impact of the option was measured on three different Intel machines: two bigger NUMA boxes and one smaller UMA box. Kernel-intensive (IO, scheduling, networking) benchmarks were selected, plus a set of HPC workloads from the NAS Parallel Benchmark. The tests were done on upstream kernel 5.0-rc8 with openSUSE Leap 15.0 userspace. The majority of the tests are unaffected. The only significant exception is the scheduler section, which suffers a 1-3% degradation. Evaluated-by: Giovanni Gherdovich <ggherdovich@suse.cz> Signed-off-by: Miroslav Benes <mbenes@suse.cz> Acked-by: Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by: Petr Mladek <pmladek@suse.com>
-
- 27 Mar, 2019 1 commit
-
-
Joe Lawrence authored
Adrian reports that 'make -C tools clean' results in removal of the livepatch selftest shell scripts. As per the selftest lib.mk file, TEST_PROGS are for test shell scripts, not TEST_GEN_PROGS. Adjust the livepatch selftest Makefile accordingly. Reported-by: Adrian Hunter <adrian.hunter@intel.com> Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com> Tested-by: Adrian Hunter <adrian.hunter@intel.com> Signed-off-by: Petr Mladek <pmladek@suse.com>
-
- 05 Mar, 2019 3 commits
-
-
Jiri Kosina authored
Atomic replace allows creating cumulative patches. They are useful when you maintain many livepatches and want to remove one that is lower on the stack. In addition, it is very useful when multiple patches touch the same function and there are dependencies between them. It's also a feature some of the distros are already using to distribute their patches.
-
Jiri Kosina authored
Ability to send a fake signal to blocking tasks automatically, instead of requiring manual intervention, from Miroslav Benes
-
Jiri Kosina authored
Documentation change towards group maintainership of livepatching code
samples/ warning fix from Nicholas Mc Guire
-
- 12 Feb, 2019 1 commit
-
-
Joe Lawrence authored
The livepatch selftest functions.sh library uses "$*" and an intermediate variable to extract and then pass arguments from function to function call. The effect of this combination is that the argument list is flattened into a single argument. Sometimes this is benign, but in cases like __load_mod(), the modprobe invocation will interpret all the module parameters as a single parameter. Drop the intermediate variable and use the "$@" special parameter as described in the bash manual.
Link: https://www.gnu.org/software/bash/manual/bash.html#Special-Parameters
Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com> Reviewed-by: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com> Acked-by: Miroslav Benes <mbenes@suse.cz> Signed-off-by: Petr Mladek <pmladek@suse.com>
-
- 06 Feb, 2019 5 commits
-
-
Petr Mladek authored
Livepatches can no longer be enabled and disabled repeatedly. The klp_patches list contains only enabled patches, and possibly the patch in transition. The module coming and going callbacks no longer need to check for these states. They have to process all listed patches. Suggested-by: Josh Poimboeuf <jpoimboe@redhat.com> Acked-by: Miroslav Benes <mbenes@suse.cz> Acked-by: Joe Lawrence <joe.lawrence@redhat.com> Signed-off-by: Petr Mladek <pmladek@suse.com>
-
Petr Mladek authored
Add proper error handling when allocating or getting shadow variables in the selftest. It prevents an invalid pointer access in some situations and demonstrates good programming practice in the others. The error codes are just the best guess and are specific to this particular test. In general, klp_shadow_alloc() returns NULL also when the given shadow variable has already been allocated. In addition, both klp_shadow_alloc() and klp_shadow_get_or_alloc() might fail for other reasons, for example when the constructor fails. Note that the error code is not really important even in real life. The use of shadow variables should be transparent for the original livepatched code. Acked-by: Miroslav Benes <mbenes@suse.cz> Acked-by: Joe Lawrence <joe.lawrence@redhat.com> Signed-off-by: Petr Mladek <pmladek@suse.com>
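A minimal sketch of the kind of check the commit describes, using the klp_shadow_alloc() API; the shadow id, the attached object and the chosen error code are illustrative, as the commit itself notes:

    #include <linux/livepatch.h>

    #define SV_COUNTER_ID 1     /* arbitrary shadow variable id for this sketch */

    static int attach_counter(void *obj)
    {
            int *counter;

            /* NULL means the allocation failed, the constructor failed, or
             * the variable already exists; -ENOMEM is just a best-guess code. */
            counter = klp_shadow_alloc(obj, SV_COUNTER_ID, sizeof(*counter),
                                       GFP_KERNEL, NULL, NULL);
            if (!counter)
                    return -ENOMEM;

            return 0;
    }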
-
Joe Lawrence authored
Fixes the following smatch warning:
    lib/livepatch/test_klp_shadow_vars.c:47 ptr_id() warn: returning -1 instead of -ENOMEM is sloppy
Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com> Acked-by: Miroslav Benes <mbenes@suse.cz> Signed-off-by: Petr Mladek <pmladek@suse.com>
-
Petr Mladek authored
There are already macros to iterate over struct klp_func and struct klp_object. Add klp_for_each_patch() as well, but make it internal because the klp_patches list is internal, too. Suggested-by: Josh Poimboeuf <jpoimboe@redhat.com> Acked-by: Miroslav Benes <mbenes@suse.cz> Acked-by: Joe Lawrence <joe.lawrence@redhat.com> Signed-off-by: Petr Mladek <pmladek@suse.com>
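The macro is presumably a thin wrapper over the list iteration, along these lines (a sketch, not a quote of the patch):

    /* Internal to kernel/livepatch/: walk all patches on the klp_patches list. */
    #define klp_for_each_patch(patch) \
            list_for_each_entry(patch, &klp_patches, list)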
-
Alice Ferrazzi authored
For the result of an unsupported operation, it is better to use EOPNOTSUPP as the error code. ENOSYS is used only for 'invalid syscall nr' and nothing else. Signed-off-by: Alice Ferrazzi <alice.ferrazzi@miraclelinux.com> Acked-by: Miroslav Benes <mbenes@suse.cz> Acked-by: Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by: Petr Mladek <pmladek@suse.com>
-
- 01 Feb, 2019 1 commit
-
-
Joe Lawrence authored
The livepatch selftest scripts turn on dynamic_debug of livepatch kernel source to determine expected behavior. TEST_LIVEPATCH should therefore include DYNAMIC_DEBUG in its list of dependencies. Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com> Acked-by: Miroslav Benes <mbenes@suse.cz> Signed-off-by: Petr Mladek <pmladek@suse.com>
-
- 25 Jan, 2019 1 commit
-
-
Nicholas Mc Guire authored
Sparse reported warnings about non-static symbols. For the variables a simple static attribute is fine - for the functions referenced by livepatch via klp_func the symbol names must be unmodified in the symbol table and the patchable code has to be emitted. The resolution is to attach the __used attribute to the shared statically declared functions.
Link: https://lore.kernel.org/lkml/1544965657-26804-1-git-send-email-hofrat@osadl.org/
Suggested-by: Joe Lawrence <joe.lawrence@redhat.com> Signed-off-by: Nicholas Mc Guire <hofrat@osadl.org> Acked-by: Miroslav Benes <mbenes@suse.cz> Signed-off-by: Jiri Kosina <jkosina@suse.cz>
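An illustrative sketch of the pattern (the function name and body are made up; only the __used annotation is the point): the function can be static to satisfy sparse, while __used guarantees the compiler still emits its code and keeps the symbol that livepatching resolves by name.

    #include <linux/compiler.h>
    #include <linux/printk.h>

    /* Hypothetical helper shared by livepatch sample modules. */
    static void __used shared_print_state(const char *who)
    {
            pr_info("%s: consistency model in action\n", who);
    }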
-
- 24 Jan, 2019 1 commit
-
-
Jiri Kosina authored
Update MAINTAINERS for livepatching to reflect status quo better. Also move the tree to a shared location, as we are moving more towards group maintainership model. Acked-by: Miroslav Benes <mbenes@suse.cz> Acked-by: Jessica Yu <jeyu@kernel.org> Acked-by: Petr Mladek <pmladek@suse.com> Acked-by: Joe Lawrence <joe.lawrence@redhat.com> Acked-by: Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by: Jiri Kosina <jkosina@suse.cz>
-
- 16 Jan, 2019 2 commits
-
-
Miroslav Benes authored
The fake signal is sent automatically now. We can rely on it completely and remove the sysfs attribute. Signed-off-by: Miroslav Benes <mbenes@suse.cz> Signed-off-by: Jiri Kosina <jkosina@suse.cz>
-
Miroslav Benes authored
An administrator may send a fake signal to all remaining blocking tasks of a running transition by writing to /sys/kernel/livepatch/<patch>/signal attribute. Let's do it automatically after 15 seconds. The timeout is chosen deliberately. It gives the tasks enough time to transition themselves. Theoretically, sending it once should be more than enough. However, every task must get outside of a patched function to be successfully transitioned. It could prove not to be simple and resending could be helpful in that case. A new workqueue job could be a cleaner solution to achieve it, but it could also introduce deadlocks and cause more headaches with synchronization and cancelling. [jkosina@suse.cz: removed added newline] Signed-off-by: Miroslav Benes <mbenes@suse.cz> Signed-off-by: Jiri Kosina <jkosina@suse.cz>
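Assuming the transition retry worker runs roughly once per second, the automatic signal could be wired in along these lines; the constant name, the helper and its placement are guesses rather than quotes of the patch:

    #define SIGNALS_TIMEOUT 15      /* retries, i.e. roughly seconds */

    static unsigned int klp_signals_cnt;

    /* Called from the periodic transition retry path while some tasks
     * still block the transition. */
    static void klp_maybe_send_fake_signal(void)
    {
            klp_signals_cnt++;
            /* Send the fake signal after ~15 seconds, then keep resending
             * every ~15 seconds in case a task re-entered a patched function. */
            if (klp_signals_cnt >= SIGNALS_TIMEOUT &&
                !(klp_signals_cnt % SIGNALS_TIMEOUT))
                    klp_send_signals();     /* same path the sysfs knob used */
    }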
-
- 11 Jan, 2019 11 commits
-
-
Joe Lawrence authored
Add a few livepatch modules and simple target modules that the included regression suite can run tests against:
- basic livepatching (multiple patches, atomic replace)
- pre/post (un)patch callbacks
- shadow variable API
Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com> Signed-off-by: Petr Mladek <pmladek@suse.com> Tested-by: Miroslav Benes <mbenes@suse.cz> Tested-by: Alice Ferrazzi <alice.ferrazzi@gmail.com> Acked-by: Joe Lawrence <joe.lawrence@redhat.com> Acked-by: Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by: Jiri Kosina <jkosina@suse.cz>
-
Petr Mladek authored
The atomic replace and cumulative patches were introduced as a more secure way to handle dependent patches. They simplify the logic:
+ Any new cumulative patch is supposed to take over shadow variables and changes made by callbacks from previous livepatches.
+ All replaced patches are discarded and the modules can be unloaded.
As a result, there is only one scenario when a cumulative livepatch gets disabled. The different handling of "normal" and cumulative patches might cause confusion. It would make sense to keep only one mode. On the other hand, it would be rude to enforce using cumulative livepatches even for trivial and independent (hot) fixes. However, the stack of patches is not really necessary any longer: the patch ordering was never clearly visible via the sysfs interface, and the "normal" patches need a lot of caution anyway. Note that the list of enabled patches is still necessary, but the ordering is no longer enforced. Otherwise, the code is ready to disable livepatches in an arbitrary order. Namely, klp_check_stack_func() always looks for the function from the livepatch that is being disabled, klp_func structures are just removed from the related func_stack, and the ftrace handler is removed only when the func_stack becomes empty.
Signed-off-by: Petr Mladek <pmladek@suse.com> Acked-by: Miroslav Benes <mbenes@suse.cz> Acked-by: Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by: Jiri Kosina <jkosina@suse.cz>
-
Petr Mladek authored
User documentation for the atomic replace feature. It makes it easier to maintain livepatches using so-called cumulative patches. Signed-off-by: Petr Mladek <pmladek@suse.com> Acked-by: Miroslav Benes <mbenes@suse.cz> Acked-by: Joe Lawrence <joe.lawrence@redhat.com> Acked-by: Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by: Jiri Kosina <jkosina@suse.cz>
-
Petr Mladek authored
Replaced patches are removed from the stack when the transition is finished. It means that Nop structures will never be needed again and can be removed. Why should we care?
+ Nop structures give the impression that the function is patched even though the ftrace handler has no effect.
+ Ftrace handlers do not come for free. They cause a slowdown that might be visible in some workloads. The ftrace-related slowdown might actually be the reason why the function is no longer patched in the new cumulative patch. One would expect that the cumulative patch would help solve these problems as well.
+ Cumulative patches are supposed to replace any earlier version of the patch. The number of NOPs depends on which version was replaced. This multiplies the number of scenarios that might happen. One might say that NOPs are innocent. But there are even optimized NOP instructions for different processors, see for example arch/x86/kernel/alternative.c. And klp_ftrace_handler() is much more complicated.
+ It sounds natural to clean up a mess that is no longer needed. It could only get worse if we do not do it.
This patch allows unpatching and freeing the dynamic structures independently when the transition finishes. The free part is a bit tricky because kobject free callbacks are called asynchronously. We cannot easily wait for them. Fortunately, we do not have to: any further access can be avoided by removing them from the dynamic lists.
Signed-off-by: Petr Mladek <pmladek@suse.com> Acked-by: Miroslav Benes <mbenes@suse.cz> Acked-by: Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by: Jiri Kosina <jkosina@suse.cz>
-
Jason Baron authored
Sometimes we would like to revert a particular fix. Currently, this is not easy because we want to keep all other fixes active and we could revert only the last applied patch. One solution would be to apply a new patch that implemented all the reverted functions like in the original code. It would work as expected, but there would be unnecessary redirections. In addition, it would also require knowing which functions need to be reverted at build time.

Another problem is when there are many patches that touch the same functions. There might be dependencies between patches that are not enforced on the kernel side. Also, it might be pretty hard to actually prepare the patch and ensure compatibility with the other patches.

Atomic replace && cumulative patches:

A better solution would be to create a cumulative patch and say that it replaces all older ones. This patch adds a new "replace" flag to struct klp_patch. When it is enabled, a set of 'nop' klp_func will be dynamically created for all functions that are already being patched but that will no longer be modified by the new patch. They are used as a new target during the patch transition.

The idea is to handle the Nops' structures like the static ones. When the dynamic structures are allocated, we initialize all values that are normally statically defined. The only exception is "new_func" in struct klp_func. It has to point to the original function, and the address is known only when the object (module) is loaded. Note that we really need to set it. The address is used, for example, in klp_check_stack_func().

Nevertheless, we still need to distinguish the dynamically allocated structures in some operations. For this, we add a "nop" flag into struct klp_func and a "dynamic" flag into struct klp_object. They need special handling in the following situations:
+ The structures are added into the lists of objects and functions immediately. In fact, the lists were created for this purpose.
+ The address of the original function is known only when the patched object (module) is loaded. Therefore it is copied later in klp_init_object_loaded().
+ The ftrace handler must not set PC to func->new_func. It would cause an infinite loop because the address points back to the beginning of the original function.
+ The various free() functions must free the structure itself.

Note that other ways to detect the dynamic structures are not considered safe. For example, even a statically defined struct klp_object might include an empty funcs array; it might be there just to run some callbacks. Also note that the safe iterator must be used in the free() functions, otherwise already freed structures might get accessed.

Special callbacks handling:

The callbacks from the replaced patches are _not_ called by intention. It would be pretty hard to define a reasonable semantic and implement it. It might even be counter-productive. The new patch is cumulative. It is supposed to include most of the changes from older patches. In most cases, it will not want to call the pre_unpatch() and post_unpatch() callbacks from the replaced patches. It would disable/break things for no good reason. Also, it should be easier to handle various scenarios in a single script in the new patch than to think about interactions caused by running many scripts from older patches. Not to mention that the old scripts would not even expect to be called in this situation.

Removing replaced patches:

One nice effect of the cumulative patches is that the code from the older patches is no longer used. Therefore the replaced patches can be removed. It has several advantages:
+ Nops' structs will no longer be necessary and might be removed. This would save memory, restore performance (no ftrace handler), and allow a clear view of what is really patched.
+ Disabling the patch will cause the original code to be used everywhere. Therefore the livepatch callbacks could handle only one scenario. Note that the situation is already complex enough when the patch gets enabled. It is currently solved by calling callbacks only from the new cumulative patch.
+ The state is clean in both the sysfs interface and lsmod. The modules with the replaced livepatches might even get removed from the system.

Some people actually expected this behavior from the beginning. After all, a cumulative patch is supposed to "completely" replace an existing one. It is like when a new version of an application replaces an older one.

This patch does the first step. It removes the replaced patches from the list of patches. It is safe. The consistency model ensures that they are no longer used. In other words, each process works only with the structures from klp_transition_patch. The removal is done by a special function. It combines actions done by __disable_patch() and klp_complete_transition(). But it is a fast track without all the transaction-related stuff.

Signed-off-by: Jason Baron <jbaron@akamai.com> [pmladek@suse.com: Split, reuse existing code, simplified] Signed-off-by: Petr Mladek <pmladek@suse.com> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Jessica Yu <jeyu@kernel.org> Cc: Jiri Kosina <jikos@kernel.org> Cc: Miroslav Benes <mbenes@suse.cz> Acked-by: Miroslav Benes <mbenes@suse.cz> Acked-by: Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by: Jiri Kosina <jkosina@suse.cz>
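Sketch of what a cumulative patch definition using the new flag might look like; the objs/funcs arrays are declared as in any livepatch module (see the sample), and only the .replace field is the part this commit adds:

    /* objs/funcs declared as usual for a livepatch module. */
    static struct klp_patch patch = {
            .mod  = THIS_MODULE,
            .objs = objs,
            /* New flag: this patch replaces all previously enabled
             * livepatches; nop funcs are generated for functions they
             * patched that this patch no longer modifies. */
            .replace = true,
    };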
-
Jason Baron authored
Currently klp_patch contains a pointer to a statically allocated array of struct klp_object and struct klp_object contains a pointer to a statically allocated array of klp_func. In order to allow for the dynamic allocation of objects and functions, link klp_patch, klp_object, and klp_func together via linked lists. This allows us to more easily allocate new objects and functions, while having the iterator be a simple linked list walk. The static structures are added to the lists early. This allows adding the dynamically allocated objects before the klp_init_object() and klp_init_func() calls, and therefore reduces further changes to the code. This patch does not change the existing behavior. Signed-off-by: Jason Baron <jbaron@akamai.com> [pmladek@suse.com: Initialize lists before init calls] Signed-off-by: Petr Mladek <pmladek@suse.com> Acked-by: Miroslav Benes <mbenes@suse.cz> Acked-by: Joe Lawrence <joe.lawrence@redhat.com> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Jiri Kosina <jikos@kernel.org> Acked-by: Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by: Jiri Kosina <jkosina@suse.cz>
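Roughly, the linkage could look like this sketch; the list field names are a best guess from the description, and all other members are omitted:

    struct klp_func {
            /* ... existing members ... */
            struct list_head node;          /* entry in klp_object::func_list */
    };

    struct klp_object {
            /* ... existing members ... */
            struct list_head func_list;     /* static and dynamic funcs */
            struct list_head node;          /* entry in klp_patch::obj_list */
    };

    struct klp_patch {
            /* ... existing members ... */
            struct list_head obj_list;      /* static and dynamic objects */
            struct list_head list;          /* entry in the global klp_patches list */
    };

    /* The iterators then become plain list walks, e.g.: */
    #define klp_for_each_object(patch, obj) \
            list_for_each_entry(obj, &(patch)->obj_list, node)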
-
Petr Mladek authored
The possibility to re-enable a registered patch was useful for immediate patches where the livepatch module had to stay until the system reboot. The improved consistency model allows achieving the same result by unloading and loading the livepatch module again.

Also, we are going to add a feature called atomic replace. It will allow creating a patch that replaces all already registered patches. The aim is to handle dependent patches more securely. It will obsolete the stack of patches that helped to handle the dependencies so far. Then it might be unclear when re-enabling a cumulative patch is safe.

It would be complicated to support the many modes. Instead, we could actually make the API and code easier to understand. Therefore, remove the two-step public API. All the checks and init calls are moved from klp_register_patch() to klp_enable_patch(). Also, the patch is automatically freed, including the sysfs interface, when the transition to the disabled state is completed.

As a result, there is never a disabled patch on the top of the stack. Therefore we do not need to check the stack in __klp_enable_patch(), and we can simplify the check in __klp_disable_patch(). Also the API and logic are much easier. It is enough to call klp_enable_patch() in the module_init() call. The patch can be disabled by writing '0' into /sys/kernel/livepatch/<patch>/enabled. Then the module can be removed once the transition finishes and the sysfs interface is freed.

The only problem is how to free the structures and kobjects safely. The operation is triggered from the sysfs interface. We could not put the related kobject from there because it would cause lock inversion between klp_mutex and kernfs locks, see the kn->count lockdep map. Therefore, offload the free task to a workqueue. It is perfectly fine:
+ The patch can no longer be used in the livepatch operations.
+ The module could not be removed until the free operation finishes and module_put() is called.
+ The operation is asynchronous already when the first klp_try_complete_transition() fails and another call is queued with a delay.
Suggested-by: Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by: Petr Mladek <pmladek@suse.com> Acked-by: Miroslav Benes <mbenes@suse.cz> Acked-by: Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by: Jiri Kosina <jkosina@suse.cz>
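With the registration step gone, a minimal livepatch module boils down to something like this sketch (the patch definition itself is declared as in the samples; function names here are illustrative):

    static int livepatch_init(void)
    {
            /* One call does it all: init, sysfs setup, and start the transition. */
            return klp_enable_patch(&patch);
    }

    static void livepatch_exit(void)
    {
            /* Nothing to do here: the patch is disabled via
             * /sys/kernel/livepatch/<patch>/enabled and its structures are
             * freed asynchronously once the transition completes. */
    }

    module_init(livepatch_init);
    module_exit(livepatch_exit);
    MODULE_LICENSE("GPL");
    MODULE_INFO(livepatch, "Y");

Disabling and unloading is then just writing '0' to the enabled attribute and removing the module once the transition has finished.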
-
Petr Mladek authored
module_put() is currently never called in klp_complete_transition() when klp_force is set. As a result, we might keep the reference count even when klp_enable_patch() fails and klp_cancel_transition() is called. This might give the impression that a module might get blocked in some strange init state. Fortunately, it is not the case. The reference count is ignored when mod->init fails and erroneous modules are always removed. Anyway, this might be confusing. Instead, this patch moves the global klp_forced flag into struct klp_patch. As a result, we block only modules that might still be in use after a forced transition. Newly loaded livepatches might eventually be completely removed later. It is not a big deal. But the code is at least consistent with reality. Signed-off-by: Petr Mladek <pmladek@suse.com> Acked-by: Joe Lawrence <joe.lawrence@redhat.com> Acked-by: Miroslav Benes <mbenes@suse.cz> Acked-by: Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by: Jiri Kosina <jkosina@suse.cz>
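A sketch of the idea, assuming the flag simply moves from a file-scope variable into struct klp_patch; the field name and the exact call site are assumptions:

    struct klp_patch {
            /* ... existing members ... */
            bool forced;    /* a transition of this patch was forced */
    };

    /* In klp_complete_transition(), roughly: only a patch that was never
     * forced may drop the module reference when it ends up unpatched. */
    if (!klp_transition_patch->forced &&
        klp_target_state == KLP_UNPATCHED)
            module_put(klp_transition_patch->mod);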
-
Petr Mladek authored
The code for freeing livepatch structures is a bit scattered and tricky:
+ Direct calls to klp_free_*_limited() and kobject_put() are used to release partially initialized objects.
+ klp_free_patch() removes the patch from the public list and releases all objects except for patch->kobj.
+ kobject_put(&patch->kobj) and the related wait_for_completion() are called directly outside klp_mutex; this code is duplicated.
Now, we are going to remove the registration stage to simplify the API and the code. This would require handling more situations in klp_enable_patch() error paths. More importantly, we are going to add a feature called atomic replace. It will need to dynamically create func and object structures. We will want to reuse the existing init() and free() functions. This would create even more error path scenarios.
This patch implements more straightforward free functions:
+ They check the kobj_added flag instead of @limit[*].
+ patch->list is initialized early so that the check for an empty list always works.
+ The actions that have to be done outside klp_mutex are done in a separate klp_free_patch_finish() function. It waits only when patch->kobj was really released via the _start() part.
The patch does not change the existing behavior.
[*] We need our own flag to track that the kobject was successfully added to the hierarchy. Note that kobj.state_initialized only indicates that the kobject has been initialized, not whether it has been added (and needs to be removed on cleanup).
Signed-off-by: Petr Mladek <pmladek@suse.com> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Miroslav Benes <mbenes@suse.cz> Cc: Jessica Yu <jeyu@kernel.org> Cc: Jiri Kosina <jikos@kernel.org> Cc: Jason Baron <jbaron@akamai.com> Acked-by: Miroslav Benes <mbenes@suse.cz> Acked-by: Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by: Jiri Kosina <jkosina@suse.cz>
-
Petr Mladek authored
We are going to simplify the API and code by removing the registration step. This would require calling init/free functions from enable/disable ones. This patch just moves the code to prevent more forward declarations. This patch does not change the code except for two forward declarations. Signed-off-by: Petr Mladek <pmladek@suse.com> Acked-by: Miroslav Benes <mbenes@suse.cz> Acked-by: Joe Lawrence <joe.lawrence@redhat.com> Acked-by: Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by: Jiri Kosina <jkosina@suse.cz>
-
Petr Mladek authored
The addresses of the function to be patched and of the new function are stored in struct klp_func as:

    void *new_func;
    unsigned long old_addr;

The different naming scheme and type are derived from the way the addresses are set. @old_addr is assigned at runtime using a kallsyms-based search. @new_func is statically initialized, for example:

    static struct klp_func funcs[] = {
            {
                    .old_name = "cmdline_proc_show",
                    .new_func = livepatch_cmdline_proc_show,
            }, { }
    };

This patch changes unsigned long old_addr -> void *old_func. It removes some confusion when these addresses are later used in the code. It is motivated by a followup patch that adds special NOP struct klp_func where we want to assign func->new_func = func->old_addr, respectively func->new_func = func->old_func. This patch does not modify the existing behavior.
Suggested-by: Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by: Petr Mladek <pmladek@suse.com> Acked-by: Miroslav Benes <mbenes@suse.cz> Acked-by: Joe Lawrence <joe.lawrence@redhat.com> Acked-by: Alice Ferrazzi <alice.ferrazzi@gmail.com> Acked-by: Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by: Jiri Kosina <jkosina@suse.cz>
-
- 06 Jan, 2019 3 commits
-
-
git://git.kernel.org/pub/scm/linux/kernel/git/jikos/livepatching
Linus Torvalds authored
Pull livepatch update from Jiri Kosina: "Return value checking fixup in livepatching samples, from Nicholas Mc Guire"
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/livepatching:
  livepatch: check kzalloc return values
-
git://git.kernel.org/pub/scm/linux/kernel/git/rzhang/linux
Linus Torvalds authored
Pull thermal management updates from Zhang Rui:
- Add locking for cooling device sysfs attribute in case the cooling device state is changed by userspace and thermal framework simultaneously. (Thara Gopinath)
- Fix a problem that passive cooling is reset improperly after system suspend/resume. (Wei Wang)
- Cleanup the driver/thermal/ directory by moving intel and qcom platform specific drivers to platform specific sub-directories. (Amit Kucheria)
- Some trivial cleanups. (Lukasz Luba, Wolfram Sang)
* 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/rzhang/linux:
  thermal/intel: fixup for Kconfig string parsing tightening up
  drivers: thermal: Move QCOM_SPMI_TEMP_ALARM into the qcom subdir
  drivers: thermal: Move various drivers for intel platforms into a subdir
  thermal: Fix locking in cooling device sysfs update cur_state
  Thermal: do not clear passive state during system sleep
  thermal: zx2967_thermal: simplify getting .driver_data
  thermal: st: st_thermal: simplify getting .driver_data
  thermal: spear_thermal: simplify getting .driver_data
  thermal: rockchip_thermal: simplify getting .driver_data
  thermal: int340x_thermal: int3400_thermal: simplify getting .driver_data
  thermal: remove unused function parameter
-
git://git.kernel.org/pub/scm/linux/kernel/git/evalenti/linux-soc-thermal
Linus Torvalds authored
Pull thermal SoC updates from Eduardo Valentin:
- Tegra DT binding documentation for Tegra194
- Armada now supports ap806 and cp110
- RCAR thermal now supports R8A774C0 and R8A77990
- Fixes on thermal_hwmon, IMX, generic-ADC, ST, RCAR, Broadcom, Uniphier, QCOM, Tegra, PowerClamp, and Armada thermal drivers.
* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/evalenti/linux-soc-thermal: (22 commits)
  thermal: generic-adc: Fix adc to temp interpolation
  thermal: rcar_thermal: add R8A77990 support
  dt-bindings: thermal: rcar-thermal: add R8A77990 support
  thermal: rcar_thermal: add R8A774C0 support
  dt-bindings: thermal: rcar-thermal: add R8A774C0 support
  dt-bindings: cp110: document the thermal interrupt capabilities
  dt-bindings: ap806: document the thermal interrupt capabilities
  MAINTAINERS: thermal: add entry for Marvell MVEBU thermal driver
  thermal: armada: add overheat interrupt support
  thermal: st: fix Makefile typo
  thermal: uniphier: Convert to SPDX identifier
  thermal/intel_powerclamp: Change to use DEFINE_SHOW_ATTRIBUTE macro
  thermal: tegra: soctherm: Change to use DEFINE_SHOW_ATTRIBUTE macro
  dt-bindings: thermal: tegra-bpmp: Add Tegra194 support
  thermal: imx: save one condition block for normal case of nvmem initialization
  thermal: imx: fix for dependency on cpu-freq
  thermal: tsens: qcom: do not create duplicate regmap debugfs entries
  thermal: armada: Use PTR_ERR_OR_ZERO in armada_thermal_probe_legacy()
  dt-bindings: thermal: rcar-gen3-thermal: All variants use 3 interrupts
  thermal: broadcom: use devm_thermal_zone_of_sensor_register
  ...
-
- 05 Jan, 2019 5 commits
-
-
git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace
Linus Torvalds authored
Pull ftrace sh build fix from Steven Rostedt: "It appears that the zero-day bot did find a bug in my sh build. And that I didn't have the bad code in my config file when I cross compiled it, although there are a few other errors in sh that makes it not build for me, I missed that I added one more"
* tag 'trace-v4.21-1' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace:
  sh: ftrace: Fix missing parenthesis in WARN_ON()
-
git://git.samba.org/sfrench/cifs-2.6
Linus Torvalds authored
Pull smb3 fixes from Steve French: "Three fixes, one for stable, one adds the (most secure) SMB3.1.1 dialect to default list requested"
* tag '4.21-smb3-small-fixes' of git://git.samba.org/sfrench/cifs-2.6:
  smb3: add smb3.1.1 to default dialect list
  cifs: fix confusing warning message on reconnect
  smb3: fix large reads on encrypted connections
-
git://git.kernel.org/pub/scm/fs/xfs/xfs-linux
Linus Torvalds authored
Pull iomap maintainer update from Darrick Wong: "Christoph Hellwig and I have decided to take responsibility for the fs iomap code rather than let it languish further"
* tag 'iomap-4.21-merge-3' of git://git.kernel.org/pub/scm/fs/xfs/xfs-linux:
  iomap: take responsibility for the filesystem iomap code
-
git://git.kernel.org/pub/scm/fs/xfs/xfs-linux
Linus Torvalds authored
Pull xfs fixlets from Darrick Wong: "Remove a couple of unnecessary local variables"
* tag 'xfs-4.21-merge-3' of git://git.kernel.org/pub/scm/fs/xfs/xfs-linux:
  xfs: xfs_fsops: drop useless LIST_HEAD
  xfs: xfs_buf: drop useless LIST_HEAD
-
git://github.com/ceph/ceph-client
Linus Torvalds authored
Pull ceph updates from Ilya Dryomov: "A fairly quiet round: a couple of messenger performance improvements from myself and a few cap handling fixes from Zheng"
* tag 'ceph-for-4.21-rc1' of git://github.com/ceph/ceph-client:
  ceph: don't encode inode pathes into reconnect message
  ceph: update wanted caps after resuming stale session
  ceph: skip updating 'wanted' caps if caps are already issued
  ceph: don't request excl caps when mount is readonly
  ceph: don't update importing cap's mseq when handing cap export
  libceph: switch more to bool in ceph_tcp_sendmsg()
  libceph: use MSG_SENDPAGE_NOTLAST with ceph_tcp_sendpage()
  libceph: use sock_no_sendpage() as a fallback in ceph_tcp_sendpage()
  libceph: drop last_piece logic from write_partial_message_data()
  ceph: remove redundant assignment
  ceph: cleanup splice_dentry()
-