Commit 958ef1e3 authored by Petr Mladek, committed by Jiri Kosina

livepatch: Simplify API by removing registration step

The possibility to re-enable a registered patch was useful for immediate
patches, where the livepatch module had to stay loaded until the system
rebooted. The improved consistency model makes it possible to achieve
the same result by unloading and loading the livepatch module again.

We are also going to add a feature called atomic replace. It will allow
creating a patch that replaces all already registered patches. The aim
is to handle dependent patches more securely. It will obsolete the stack
of patches that has helped to handle the dependencies so far. It might
then be unclear when re-enabling a cumulative patch is safe.

It would be complicated to support all these modes. Instead, we can
make the API and code easier to understand.

Therefore, remove the two-step public API. All the checks and init calls
are moved from klp_register_patch() to klp_enable_patch(). Also, the patch
is automatically freed, including the sysfs interface, when the transition
to the disabled state completes.

As a result, there is never a disabled patch on top of the stack.
Therefore we do not need to check the stack in __klp_enable_patch(),
and we can simplify the check in __klp_disable_patch().

The API and logic are also much simpler. It is enough to call
klp_enable_patch() from the module_init() callback. The patch can be
disabled by writing '0' into /sys/kernel/livepatch/<patch>/enabled, and
the module can be removed once the transition finishes and the sysfs
interface is freed.
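
With the simplified API, a livepatch module reduces to a single
klp_enable_patch() call, as the updated in-tree samples in this commit show.
A minimal sketch follows; the patched symbol "demo_func" and all function
names here are illustrative placeholders, not part of this commit:

```c
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/livepatch.h>

/* Illustrative replacement; "demo_func" is a hypothetical vmlinux symbol. */
static int livepatch_demo_func(void)
{
	pr_info("using the live-patched implementation\n");
	return 0;
}

static struct klp_func funcs[] = {
	{
		.old_name = "demo_func",
		.new_func = livepatch_demo_func,
	}, { }
};

static struct klp_object objs[] = {
	{
		/* name == NULL means the object is vmlinux */
		.funcs = funcs,
	}, { }
};

static struct klp_patch patch = {
	.mod = THIS_MODULE,
	.objs = objs,
};

static int livepatch_demo_init(void)
{
	/* One call now performs init, sysfs setup and enabling. */
	return klp_enable_patch(&patch);
}

static void livepatch_demo_exit(void)
{
	/* Nothing to do: the patch was freed when it was disabled. */
}

module_init(livepatch_demo_init);
module_exit(livepatch_demo_exit);
MODULE_LICENSE("GPL");
MODULE_INFO(livepatch, "Y");
```

Note how module_exit() becomes empty: the patch structures and sysfs
entries are already gone by the time the module can be unloaded.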

The only problem is how to free the structures and kobjects safely.
The operation is triggered from the sysfs interface. We cannot put
the related kobject from there because it would cause a lock inversion
between klp_mutex and the kernfs locks; see the kn->count lockdep map.

Therefore, offload the free task to a workqueue. This is perfectly fine:

  + The patch can no longer be used in the livepatch operations.

  + The module cannot be removed until the free operation finishes
    and module_put() is called.

  + The operation is asynchronous already when the first
    klp_try_complete_transition() fails and another call
    is queued with a delay.
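
Put together, the asynchronous release path looks roughly like this
(condensed from this patch; locking and error handling trimmed):

```c
#include <linux/workqueue.h>
#include <linux/livepatch.h>

/*
 * Workqueue callback: waits for the sysfs interface to go away and
 * drops the module reference, outside the kernfs locks.
 */
static void klp_free_patch_work_fn(struct work_struct *work)
{
	struct klp_patch *patch =
		container_of(work, struct klp_patch, free_work);

	klp_free_patch_finish(patch);
}

/*
 * When klp_try_complete_transition() finds the patch disabled, it
 * unlinks the patch under klp_mutex and defers the rest:
 *
 *	klp_free_patch_start(patch);
 *	schedule_work(&patch->free_work);
 */
```

The work item itself is prepared once in klp_init_patch_early() via
INIT_WORK(&patch->free_work, klp_free_patch_work_fn).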
Suggested-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Petr Mladek <pmladek@suse.com>
Acked-by: Miroslav Benes <mbenes@suse.cz>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
parent 68007289
@@ -12,12 +12,11 @@ Table of Contents:
 4. Livepatch module
    4.1. New functions
    4.2. Metadata
-   4.3. Livepatch module handling
 5. Livepatch life-cycle
-   5.1. Registration
+   5.1. Loading
    5.2. Enabling
    5.3. Disabling
-   5.4. Unregistration
+   5.4. Removing
 6. Sysfs
 7. Limitations
@@ -298,117 +297,89 @@ into three levels:
    see the "Consistency model" section.
 
-4.3. Livepatch module handling
-------------------------------
-
-The usual behavior is that the new functions will get used when
-the livepatch module is loaded. For this, the module init() function
-has to register the patch (struct klp_patch) and enable it. See the
-section "Livepatch life-cycle" below for more details about these
-two operations.
-
-Module removal is only safe when there are no users of the underlying
-functions. This is the reason why the force feature permanently disables
-the removal. The forced tasks entered the functions but we cannot say
-that they returned back. Therefore it cannot be decided when the
-livepatch module can be safely removed. When the system is successfully
-transitioned to a new patch state (patched/unpatched) without being
-forced it is guaranteed that no task sleeps or runs in the old code.
-
 5. Livepatch life-cycle
 =======================
 
-Livepatching defines four basic operations that define the life cycle of each
-live patch: registration, enabling, disabling and unregistration. There are
-several reasons why it is done this way.
+Livepatching can be described by four basic operations:
+loading, enabling, disabling, removing.
 
-First, the patch is applied only when all patched symbols for already
-loaded objects are found. The error handling is much easier if this
-check is done before particular functions get redirected.
+5.1. Loading
+------------
 
-Second, it might take some time until the entire system is migrated with
-the hybrid consistency model being used. The patch revert might block
-the livepatch module removal for too long. Therefore it is useful to
-revert the patch using a separate operation that might be called
-explicitly. But it does not make sense to remove all information until
-the livepatch module is really removed.
+The only reasonable way is to enable the patch when the livepatch kernel
+module is being loaded. For this, klp_enable_patch() has to be called
+in the module_init() callback. There are two main reasons:
 
-5.1. Registration
------------------
+First, only the module has an easy access to the related struct klp_patch.
 
-Each patch first has to be registered using klp_register_patch(). This makes
-the patch known to the livepatch framework. Also it does some preliminary
-computing and checks.
-
-In particular, the patch is added into the list of known patches. The
-addresses of the patched functions are found according to their names.
-The special relocations, mentioned in the section "New functions", are
-applied. The relevant entries are created under
-/sys/kernel/livepatch/<name>. The patch is rejected when any operation
-fails.
+Second, the error code might be used to refuse loading the module when
+the patch cannot get enabled.
 
 5.2. Enabling
 -------------
 
-Registered patches might be enabled either by calling klp_enable_patch() or
-by writing '1' to /sys/kernel/livepatch/<name>/enabled. The system will
-start using the new implementation of the patched functions at this stage.
+The livepatch gets enabled by calling klp_enable_patch() from
+the module_init() callback. The system will start using the new
+implementation of the patched functions at this stage.
 
-When a patch is enabled, livepatch enters into a transition state where
-tasks are converging to the patched state. This is indicated by a value
-of '1' in /sys/kernel/livepatch/<name>/transition. Once all tasks have
-been patched, the 'transition' value changes to '0'. For more
-information about this process, see the "Consistency model" section.
+First, the addresses of the patched functions are found according to their
+names. The special relocations, mentioned in the section "New functions",
+are applied. The relevant entries are created under
+/sys/kernel/livepatch/<name>. The patch is rejected when any above
+operation fails.
 
-If an original function is patched for the first time, a function
-specific struct klp_ops is created and an universal ftrace handler is
-registered.
+Second, livepatch enters into a transition state where tasks are converging
+to the patched state. If an original function is patched for the first
+time, a function specific struct klp_ops is created and an universal
+ftrace handler is registered[*]. This stage is indicated by a value of '1'
+in /sys/kernel/livepatch/<name>/transition. For more information about
+this process, see the "Consistency model" section.
 
-Functions might be patched multiple times. The ftrace handler is registered
-only once for the given function. Further patches just add an entry to the
-list (see field `func_stack`) of the struct klp_ops. The last added
-entry is chosen by the ftrace handler and becomes the active function
-replacement.
+Finally, once all tasks have been patched, the 'transition' value changes
+to '0'.
 
-Note that the patches might be enabled in a different order than they were
-registered.
+[*] Note that functions might be patched multiple times. The ftrace handler
+    is registered only once for a given function. Further patches just add
+    an entry to the list (see field `func_stack`) of the struct klp_ops.
+    The right implementation is selected by the ftrace handler, see
+    the "Consistency model" section.
 
 5.3. Disabling
 --------------
 
-Enabled patches might get disabled either by calling klp_disable_patch() or
-by writing '0' to /sys/kernel/livepatch/<name>/enabled. At this stage
-either the code from the previously enabled patch or even the original
-code gets used.
+Enabled patches might get disabled by writing '0' to
+/sys/kernel/livepatch/<name>/enabled.
 
-When a patch is disabled, livepatch enters into a transition state where
-tasks are converging to the unpatched state. This is indicated by a
-value of '1' in /sys/kernel/livepatch/<name>/transition. Once all tasks
-have been unpatched, the 'transition' value changes to '0'. For more
-information about this process, see the "Consistency model" section.
+First, livepatch enters into a transition state where tasks are converging
+to the unpatched state. The system starts using either the code from
+the previously enabled patch or even the original one. This stage is
+indicated by a value of '1' in /sys/kernel/livepatch/<name>/transition.
+For more information about this process, see the "Consistency model"
+section.
 
-Here all the functions (struct klp_func) associated with the to-be-disabled
+Second, once all tasks have been unpatched, the 'transition' value changes
+to '0'. All the functions (struct klp_func) associated with the to-be-disabled
 patch are removed from the corresponding struct klp_ops. The ftrace handler
 is unregistered and the struct klp_ops is freed when the func_stack list
 becomes empty.
 
-Patches must be disabled in exactly the reverse order in which they were
-enabled. It makes the problem and the implementation much easier.
+Third, the sysfs interface is destroyed.
 
-5.4. Unregistration
--------------------
+Note that patches must be disabled in exactly the reverse order in which
+they were enabled. It makes the problem and the implementation much easier.
 
-Disabled patches might be unregistered by calling klp_unregister_patch().
-This can be done only when the patch is disabled and the code is no longer
-used. It must be called before the livepatch module gets unloaded.
+5.4. Removing
+-------------
 
-At this stage, all the relevant sys-fs entries are removed and the patch
-is removed from the list of known patches.
+Module removal is only safe when there are no users of functions provided
+by the module. This is the reason why the force feature permanently
+disables the removal. Only when the system is successfully transitioned
+to a new patch state (patched/unpatched) without being forced it is
+guaranteed that no task sleeps or runs in the old code.
 
 6. Sysfs
......
@@ -139,11 +139,12 @@ struct klp_object {
  * struct klp_patch - patch structure for live patching
  * @mod:	reference to the live patch module
  * @objs:	object entries for kernel objects to be patched
- * @list:	list node for global list of registered patches
+ * @list:	list node for global list of actively used patches
  * @kobj:	kobject for sysfs resources
  * @kobj_added: @kobj has been added and needs freeing
  * @enabled:	the patch is enabled (but operation may be incomplete)
  * @forced:	was involved in a forced transition
+ * @free_work:	patch cleanup from workqueue-context
  * @finish:	for waiting till it is safe to remove the patch module
  */
 struct klp_patch {
@@ -157,6 +158,7 @@ struct klp_patch {
 	bool kobj_added;
 	bool enabled;
 	bool forced;
+	struct work_struct free_work;
 	struct completion finish;
 };
@@ -168,10 +170,7 @@ struct klp_patch {
 		func->old_name || func->new_func || func->old_sympos; \
 	     func++)
 
-int klp_register_patch(struct klp_patch *);
-int klp_unregister_patch(struct klp_patch *);
 int klp_enable_patch(struct klp_patch *);
-int klp_disable_patch(struct klp_patch *);
 
 void arch_klp_init_object_loaded(struct klp_patch *patch,
 				 struct klp_object *obj);
......
@@ -45,7 +45,11 @@
  */
 DEFINE_MUTEX(klp_mutex);
 
-/* Registered patches */
+/*
+ * Actively used patches: enabled or in transition. Note that replaced
+ * or disabled patches are not listed even though the related kernel
+ * module still can be loaded.
+ */
 LIST_HEAD(klp_patches);
 
 static struct kobject *klp_root_kobj;
@@ -83,17 +87,6 @@ static void klp_find_object_module(struct klp_object *obj)
 	mutex_unlock(&module_mutex);
 }
 
-static bool klp_is_patch_registered(struct klp_patch *patch)
-{
-	struct klp_patch *mypatch;
-
-	list_for_each_entry(mypatch, &klp_patches, list)
-		if (mypatch == patch)
-			return true;
-
-	return false;
-}
-
 static bool klp_initialized(void)
 {
 	return !!klp_root_kobj;
@@ -292,7 +285,6 @@ static int klp_write_object_relocations(struct module *pmod,
  * /sys/kernel/livepatch/<patch>/<object>/<function,sympos>
  */
 static int __klp_disable_patch(struct klp_patch *patch);
-static int __klp_enable_patch(struct klp_patch *patch);
 
 static ssize_t enabled_store(struct kobject *kobj, struct kobj_attribute *attr,
 			     const char *buf, size_t count)
@@ -309,40 +301,32 @@ static ssize_t enabled_store(struct kobject *kobj, struct kobj_attribute *attr,
 
 	mutex_lock(&klp_mutex);
 
-	if (!klp_is_patch_registered(patch)) {
-		/*
-		 * Module with the patch could either disappear meanwhile or is
-		 * not properly initialized yet.
-		 */
-		ret = -EINVAL;
-		goto err;
-	}
-
 	if (patch->enabled == enabled) {
 		/* already in requested state */
 		ret = -EINVAL;
-		goto err;
+		goto out;
 	}
 
-	if (patch == klp_transition_patch) {
+	/*
+	 * Allow to reverse a pending transition in both ways. It might be
+	 * necessary to complete the transition without forcing and breaking
+	 * the system integrity.
+	 *
+	 * Do not allow to re-enable a disabled patch.
+	 */
+	if (patch == klp_transition_patch)
 		klp_reverse_transition();
-	} else if (enabled) {
-		ret = __klp_enable_patch(patch);
-		if (ret)
-			goto err;
-	} else {
+	else if (!enabled)
 		ret = __klp_disable_patch(patch);
-		if (ret)
-			goto err;
-	}
+	else
		ret = -EINVAL;
 
+out:
 	mutex_unlock(&klp_mutex);
 
+	if (ret)
+		return ret;
 	return count;
-
-err:
-	mutex_unlock(&klp_mutex);
-	return ret;
 }
 
 static ssize_t enabled_show(struct kobject *kobj,
@@ -508,7 +492,7 @@ static void klp_free_objects(struct klp_patch *patch)
  * The operation must be completed by calling klp_free_patch_finish()
  * outside klp_mutex.
  */
-static void klp_free_patch_start(struct klp_patch *patch)
+void klp_free_patch_start(struct klp_patch *patch)
 {
 	if (!list_empty(&patch->list))
 		list_del(&patch->list);
@@ -536,6 +520,23 @@ static void klp_free_patch_finish(struct klp_patch *patch)
 		kobject_put(&patch->kobj);
 		wait_for_completion(&patch->finish);
 	}
+
+	/* Put the module after the last access to struct klp_patch. */
+	if (!patch->forced)
+		module_put(patch->mod);
+}
+
+/*
+ * The livepatch might be freed from sysfs interface created by the patch.
+ * This work allows to wait until the interface is destroyed in a separate
+ * context.
+ */
+static void klp_free_patch_work_fn(struct work_struct *work)
+{
+	struct klp_patch *patch =
+		container_of(work, struct klp_patch, free_work);
+
+	klp_free_patch_finish(patch);
 }
 
 static int klp_init_func(struct klp_object *obj, struct klp_func *func)
@@ -661,6 +662,7 @@ static int klp_init_patch_early(struct klp_patch *patch)
 	patch->kobj_added = false;
 	patch->enabled = false;
 	patch->forced = false;
+	INIT_WORK(&patch->free_work, klp_free_patch_work_fn);
 	init_completion(&patch->finish);
 
 	klp_for_each_object(patch, obj) {
@@ -673,6 +675,9 @@ static int klp_init_patch_early(struct klp_patch *patch)
 			func->kobj_added = false;
 	}
 
+	if (!try_module_get(patch->mod))
+		return -ENODEV;
+
 	return 0;
 }
 
@@ -681,115 +686,22 @@ static int klp_init_patch(struct klp_patch *patch)
 	struct klp_object *obj;
 	int ret;
 
-	mutex_lock(&klp_mutex);
-
-	ret = klp_init_patch_early(patch);
-	if (ret) {
-		mutex_unlock(&klp_mutex);
-		return ret;
-	}
-
 	ret = kobject_init_and_add(&patch->kobj, &klp_ktype_patch,
 				   klp_root_kobj, "%s", patch->mod->name);
-	if (ret) {
-		mutex_unlock(&klp_mutex);
+	if (ret)
 		return ret;
-	}
 	patch->kobj_added = true;
 
 	klp_for_each_object(patch, obj) {
 		ret = klp_init_object(patch, obj);
 		if (ret)
-			goto free;
+			return ret;
 	}
 
 	list_add_tail(&patch->list, &klp_patches);
 
-	mutex_unlock(&klp_mutex);
-
-	return 0;
-
-free:
-	klp_free_patch_start(patch);
-	mutex_unlock(&klp_mutex);
-	klp_free_patch_finish(patch);
-
-	return ret;
-}
-
-/**
- * klp_unregister_patch() - unregisters a patch
- * @patch:	Disabled patch to be unregistered
- *
- * Frees the data structures and removes the sysfs interface.
- *
- * Return: 0 on success, otherwise error
- */
-int klp_unregister_patch(struct klp_patch *patch)
-{
-	int ret;
-
-	mutex_lock(&klp_mutex);
-
-	if (!klp_is_patch_registered(patch)) {
-		ret = -EINVAL;
-		goto err;
-	}
-
-	if (patch->enabled) {
-		ret = -EBUSY;
-		goto err;
-	}
-
-	klp_free_patch_start(patch);
-
-	mutex_unlock(&klp_mutex);
-
-	klp_free_patch_finish(patch);
-
-	return 0;
-err:
-	mutex_unlock(&klp_mutex);
-	return ret;
-}
-EXPORT_SYMBOL_GPL(klp_unregister_patch);
-
-/**
- * klp_register_patch() - registers a patch
- * @patch:	Patch to be registered
- *
- * Initializes the data structure associated with the patch and
- * creates the sysfs interface.
- *
- * There is no need to take the reference on the patch module here. It is done
- * later when the patch is enabled.
- *
- * Return: 0 on success, otherwise error
- */
-int klp_register_patch(struct klp_patch *patch)
-{
-	if (!patch || !patch->mod)
-		return -EINVAL;
-
-	if (!is_livepatch_module(patch->mod)) {
-		pr_err("module %s is not marked as a livepatch module\n",
-		       patch->mod->name);
-		return -EINVAL;
-	}
-
-	if (!klp_initialized())
-		return -ENODEV;
-
-	if (!klp_have_reliable_stack()) {
-		pr_err("This architecture doesn't have support for the livepatch consistency model.\n");
-		return -ENOSYS;
-	}
-
-	return klp_init_patch(patch);
+	return 0;
 }
-EXPORT_SYMBOL_GPL(klp_register_patch);
 
 static int __klp_disable_patch(struct klp_patch *patch)
 {
@@ -802,8 +714,7 @@ static int __klp_disable_patch(struct klp_patch *patch)
 		return -EBUSY;
 
 	/* enforce stacking: only the last enabled patch can be disabled */
-	if (!list_is_last(&patch->list, &klp_patches) &&
-	    list_next_entry(patch, list)->enabled)
+	if (!list_is_last(&patch->list, &klp_patches))
 		return -EBUSY;
 
 	klp_init_transition(patch, KLP_UNPATCHED);
@@ -822,44 +733,12 @@ static int __klp_disable_patch(struct klp_patch *patch)
 	smp_wmb();
 
 	klp_start_transition();
-	klp_try_complete_transition();
 	patch->enabled = false;
+	klp_try_complete_transition();
 
 	return 0;
 }
 
-/**
- * klp_disable_patch() - disables a registered patch
- * @patch:	The registered, enabled patch to be disabled
- *
- * Unregisters the patched functions from ftrace.
- *
- * Return: 0 on success, otherwise error
- */
-int klp_disable_patch(struct klp_patch *patch)
-{
-	int ret;
-
-	mutex_lock(&klp_mutex);
-
-	if (!klp_is_patch_registered(patch)) {
-		ret = -EINVAL;
-		goto err;
-	}
-
-	if (!patch->enabled) {
-		ret = -EINVAL;
-		goto err;
-	}
-
-	ret = __klp_disable_patch(patch);
-
-err:
-	mutex_unlock(&klp_mutex);
-	return ret;
-}
-EXPORT_SYMBOL_GPL(klp_disable_patch);
-
 static int __klp_enable_patch(struct klp_patch *patch)
 {
 	struct klp_object *obj;
@@ -871,17 +750,8 @@ static int __klp_enable_patch(struct klp_patch *patch)
 	if (WARN_ON(patch->enabled))
 		return -EINVAL;
 
-	/* enforce stacking: only the first disabled patch can be enabled */
-	if (patch->list.prev != &klp_patches &&
-	    !list_prev_entry(patch, list)->enabled)
-		return -EBUSY;
-
-	/*
-	 * A reference is taken on the patch module to prevent it from being
-	 * unloaded.
-	 */
-	if (!try_module_get(patch->mod))
-		return -ENODEV;
+	if (!patch->kobj_added)
+		return -EINVAL;
 
 	pr_notice("enabling patch '%s'\n", patch->mod->name);
@@ -916,8 +786,8 @@ static int __klp_enable_patch(struct klp_patch *patch)
 	}
 
 	klp_start_transition();
-	klp_try_complete_transition();
 	patch->enabled = true;
+	klp_try_complete_transition();
 
 	return 0;
 err:
@@ -928,11 +798,15 @@ static int __klp_enable_patch(struct klp_patch *patch)
 }
 
 /**
- * klp_enable_patch() - enables a registered patch
- * @patch:	The registered, disabled patch to be enabled
+ * klp_enable_patch() - enable the livepatch
+ * @patch:	patch to be enabled
  *
- * Performs the needed symbol lookups and code relocations,
- * then registers the patched functions with ftrace.
+ * Initializes the data structure associated with the patch, creates the sysfs
+ * interface, performs the needed symbol lookups and code relocations,
+ * registers the patched functions with ftrace.
+ *
+ * This function is supposed to be called from the livepatch module_init()
+ * callback.
 *
 * Return: 0 on success, otherwise error
 */
@@ -940,17 +814,51 @@ int klp_enable_patch(struct klp_patch *patch)
 {
 	int ret;
 
+	if (!patch || !patch->mod)
+		return -EINVAL;
+
+	if (!is_livepatch_module(patch->mod)) {
+		pr_err("module %s is not marked as a livepatch module\n",
+		       patch->mod->name);
+		return -EINVAL;
+	}
+
+	if (!klp_initialized())
+		return -ENODEV;
+
+	if (!klp_have_reliable_stack()) {
+		pr_err("This architecture doesn't have support for the livepatch consistency model.\n");
+		return -ENOSYS;
+	}
+
 	mutex_lock(&klp_mutex);
 
-	if (!klp_is_patch_registered(patch)) {
-		ret = -EINVAL;
-		goto err;
+	ret = klp_init_patch_early(patch);
+	if (ret) {
+		mutex_unlock(&klp_mutex);
+		return ret;
 	}
 
+	ret = klp_init_patch(patch);
+	if (ret)
+		goto err;
+
 	ret = __klp_enable_patch(patch);
+	if (ret)
+		goto err;
+
+	mutex_unlock(&klp_mutex);
+
+	return 0;
 
 err:
+	klp_free_patch_start(patch);
+
 	mutex_unlock(&klp_mutex);
+
+	klp_free_patch_finish(patch);
+
 	return ret;
 }
 EXPORT_SYMBOL_GPL(klp_enable_patch);
......
@@ -7,6 +7,8 @@
 extern struct mutex klp_mutex;
 extern struct list_head klp_patches;
 
+void klp_free_patch_start(struct klp_patch *patch);
+
 static inline bool klp_is_object_loaded(struct klp_object *obj)
 {
 	return !obj->name || obj->mod;
......
@@ -134,13 +134,6 @@ static void klp_complete_transition(void)
 	pr_notice("'%s': %s complete\n", klp_transition_patch->mod->name,
 		  klp_target_state == KLP_PATCHED ? "patching" : "unpatching");
 
-	/*
-	 * patch->forced set implies unbounded increase of module's ref count if
-	 * the module is disabled/enabled in a loop.
-	 */
-	if (!klp_transition_patch->forced && klp_target_state == KLP_UNPATCHED)
-		module_put(klp_transition_patch->mod);
-
 	klp_target_state = KLP_UNDEFINED;
 	klp_transition_patch = NULL;
 }
@@ -357,6 +350,7 @@ void klp_try_complete_transition(void)
 {
 	unsigned int cpu;
 	struct task_struct *g, *task;
+	struct klp_patch *patch;
 	bool complete = true;
 
 	WARN_ON_ONCE(klp_target_state == KLP_UNDEFINED);
@@ -405,7 +399,18 @@ void klp_try_complete_transition(void)
 	}
 
 	/* we're done, now cleanup the data structures */
+	patch = klp_transition_patch;
 	klp_complete_transition();
+
+	/*
+	 * It would make more sense to free the patch in
+	 * klp_complete_transition() but it is called also
+	 * from klp_cancel_transition().
+	 */
+	if (!patch->enabled) {
+		klp_free_patch_start(patch);
+		schedule_work(&patch->free_work);
+	}
 }
 
 /*
......
@@ -195,22 +195,11 @@ static struct klp_patch patch = {
 static int livepatch_callbacks_demo_init(void)
 {
-	int ret;
-
-	ret = klp_register_patch(&patch);
-	if (ret)
-		return ret;
-
-	ret = klp_enable_patch(&patch);
-	if (ret) {
-		WARN_ON(klp_unregister_patch(&patch));
-		return ret;
-	}
-
-	return 0;
+	return klp_enable_patch(&patch);
 }
 
 static void livepatch_callbacks_demo_exit(void)
 {
-	WARN_ON(klp_unregister_patch(&patch));
 }
 
 module_init(livepatch_callbacks_demo_init);
......
@@ -69,22 +69,11 @@ static struct klp_patch patch = {
 static int livepatch_init(void)
 {
-	int ret;
-
-	ret = klp_register_patch(&patch);
-	if (ret)
-		return ret;
-
-	ret = klp_enable_patch(&patch);
-	if (ret) {
-		WARN_ON(klp_unregister_patch(&patch));
-		return ret;
-	}
-
-	return 0;
+	return klp_enable_patch(&patch);
 }
 
 static void livepatch_exit(void)
 {
-	WARN_ON(klp_unregister_patch(&patch));
 }
 
 module_init(livepatch_init);
......
@@ -157,25 +157,13 @@ static struct klp_patch patch = {
 static int livepatch_shadow_fix1_init(void)
 {
-	int ret;
-
-	ret = klp_register_patch(&patch);
-	if (ret)
-		return ret;
-
-	ret = klp_enable_patch(&patch);
-	if (ret) {
-		WARN_ON(klp_unregister_patch(&patch));
-		return ret;
-	}
-
-	return 0;
+	return klp_enable_patch(&patch);
 }
 
 static void livepatch_shadow_fix1_exit(void)
 {
 	/* Cleanup any existing SV_LEAK shadow variables */
 	klp_shadow_free_all(SV_LEAK, livepatch_fix1_dummy_leak_dtor);
-
-	WARN_ON(klp_unregister_patch(&patch));
 }
 
 module_init(livepatch_shadow_fix1_init);
......
@@ -129,25 +129,13 @@ static struct klp_patch patch = {
 static int livepatch_shadow_fix2_init(void)
 {
-	int ret;
-
-	ret = klp_register_patch(&patch);
-	if (ret)
-		return ret;
-
-	ret = klp_enable_patch(&patch);
-	if (ret) {
-		WARN_ON(klp_unregister_patch(&patch));
-		return ret;
-	}
-
-	return 0;
+	return klp_enable_patch(&patch);
 }
 
 static void livepatch_shadow_fix2_exit(void)
 {
 	/* Cleanup any existing SV_COUNTER shadow variables */
 	klp_shadow_free_all(SV_COUNTER, NULL);
-
-	WARN_ON(klp_unregister_patch(&patch));
 }
 
 module_init(livepatch_shadow_fix2_init);
......