author    Jason Baron <jbaron@akamai.com>  2019-01-09 07:43:25 -0500
committer Jiri Kosina <jkosina@suse.cz>    2019-01-11 14:51:24 -0500
commit    e1452b607c48c642caf57299f4da83aa002f8533 (patch)
tree      487267602c0c6cfaeb247950df4cbb24f435ae6a
parent    20e55025958e18e671d92c7adea00c301ac93c43 (diff)
livepatch: Add atomic replace
Sometimes we would like to revert a particular fix. Currently, this is not
easy because we want to keep all other fixes active and we could revert
only the last applied patch.

One solution would be to apply a new patch that implemented all the
reverted functions like in the original code. It would work as expected
but there would be unnecessary redirections. In addition, it would also
require knowing which functions need to be reverted at build time.

Another problem is when there are many patches that touch the same
functions. There might be dependencies between patches that are not
enforced on the kernel side. Also it might be pretty hard to actually
prepare the patch and ensure compatibility with the other patches.

Atomic replace && cumulative patches:

A better solution would be to create a cumulative patch and say that
it replaces all older ones.

This patch adds a new "replace" flag to struct klp_patch. When it is
enabled, a set of 'nop' klp_func will be dynamically created for all
functions that are already being patched but that will no longer be
modified by the new patch. They are used as a new target during the
patch transition.

The idea is to handle Nops' structures like the static ones. When the
dynamic structures are allocated, we initialize all values that are
normally statically defined.

The only exception is "new_func" in struct klp_func. It has to point to
the original function and the address is known only when the object
(module) is loaded. Note that we really need to set it. The address is
used, for example, in klp_check_stack_func().

Nevertheless we still need to distinguish the dynamically allocated
structures in some operations. For this, we add "nop" flag into
struct klp_func and "dynamic" flag into struct klp_object. They need
special handling in the following situations:

  + The structures are added into the lists of objects and functions
    immediately. In fact, the lists were created for this purpose.
  + The address of the original function is known only when the patched
    object (module) is loaded. Therefore it is copied later in
    klp_init_object_loaded().

  + The ftrace handler must not set PC to func->new_func. It would cause
    an infinite loop because the address points back to the beginning
    of the original function.

  + The various free() functions must free the structure itself.

Note that other ways to detect the dynamic structures are not considered
safe. For example, even the statically defined struct klp_object might
include an empty funcs array. It might be there just to run some
callbacks.

Also note that the safe iterator must be used in the free() functions.
Otherwise already freed structures might get accessed.

Special callbacks handling:

The callbacks from the replaced patches are _not_ called by intention.
It would be pretty hard to define a reasonable semantic and implement it.
It might even be counter-productive.

The new patch is cumulative. It is supposed to include most of the
changes from older patches. In most cases, it will not want to call
pre_unpatch() and post_unpatch() callbacks from the replaced patches.
It would disable/break things for no good reason.

Also it should be easier to handle various scenarios in a single script
in the new patch than to think about interactions caused by running many
scripts from older patches. Not to mention that the old scripts might
not even expect to be called in this situation.

Removing replaced patches:

One nice effect of the cumulative patches is that the code from the
older patches is no longer used. Therefore the replaced patches can
be removed. It has several advantages:

  + Nops' structs will no longer be necessary and might be removed.
    This would save memory, restore performance (no ftrace handler),
    and allow a clear view on what is really patched.

  + Disabling the patch will cause using the original code everywhere.
    Therefore the livepatch callbacks could handle only one scenario.
    Note that the situation is already complex enough when the patch
    gets enabled. It is currently solved by calling callbacks only from
    the new cumulative patch.

  + The state is clean in both the sysfs interface and lsmod. The
    modules with the replaced livepatches might even get removed from
    the system.

Some people actually expected this behavior from the beginning. After
all, a cumulative patch is supposed to "completely" replace an existing
one. It is like when a new version of an application replaces an older
one.

This patch does the first step. It removes the replaced patches from
the list of patches. It is safe. The consistency model ensures that
they are no longer used. In other words, each process works only with
the structures from klp_transition_patch.

The removal is done by a special function. It combines actions done by
__disable_patch() and klp_complete_transition(). But it is a fast
track without all the transaction-related stuff.

Signed-off-by: Jason Baron <jbaron@akamai.com>
[pmladek@suse.com: Split, reuse existing code, simplified]
Signed-off-by: Petr Mladek <pmladek@suse.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Jessica Yu <jeyu@kernel.org>
Cc: Jiri Kosina <jikos@kernel.org>
Cc: Miroslav Benes <mbenes@suse.cz>
Acked-by: Miroslav Benes <mbenes@suse.cz>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
-rw-r--r--  Documentation/livepatch/livepatch.txt   31
-rw-r--r--  include/linux/livepatch.h               12
-rw-r--r--  kernel/livepatch/core.c                232
-rw-r--r--  kernel/livepatch/core.h                  1
-rw-r--r--  kernel/livepatch/patch.c                 8
-rw-r--r--  kernel/livepatch/transition.c            3
6 files changed, 273 insertions(+), 14 deletions(-)
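One detail the commit message insists on is using the safe iterator in the free() paths. A minimal userspace sketch (illustrative types, not kernel code) shows why: advancing via a pointer inside a node that has just been freed is a use-after-free, so the successor must be saved before freeing — exactly what `list_for_each_entry_safe()` does with its extra cursor, and what the new `klp_for_each_func_safe()`/`klp_for_each_object_safe()` macros wrap.

```c
/* Why the free() paths need the _safe iterator: a node's forward
 * pointer is invalid once the node is freed. Userspace sketch. */
#include <assert.h>
#include <stdlib.h>

struct node {
	struct node *next;
};

/* BROKEN pattern — 'n = n->next' reads freed memory:
 *	for (n = head; n; n = n->next)
 *		free(n);
 */

/* Safe pattern: remember the successor first, as
 * list_for_each_entry_safe() does via its 'tmp' cursor. */
unsigned long free_all(struct node *head)
{
	unsigned long freed = 0;
	struct node *n = head, *tmp;

	while (n) {
		tmp = n->next;   /* save the successor before freeing */
		free(n);
		freed++;
		n = tmp;
	}
	return freed;
}

/* Push a new node; on allocation failure the list is unchanged. */
struct node *push(struct node *head)
{
	struct node *n = calloc(1, sizeof(*n));

	if (!n)
		return head;
	n->next = head;
	return n;
}
```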
diff --git a/Documentation/livepatch/livepatch.txt b/Documentation/livepatch/livepatch.txt
index 8f56490a4bb6..2a70f43166f6 100644
--- a/Documentation/livepatch/livepatch.txt
+++ b/Documentation/livepatch/livepatch.txt
@@ -15,8 +15,9 @@ Table of Contents:
 5. Livepatch life-cycle
    5.1. Loading
    5.2. Enabling
-   5.3. Disabling
-   5.4. Removing
+   5.3. Replacing
+   5.4. Disabling
+   5.5. Removing
 6. Sysfs
 7. Limitations
 
@@ -300,8 +301,12 @@ into three levels:
 5. Livepatch life-cycle
 =======================
 
-Livepatching can be described by four basic operations:
-loading, enabling, disabling, removing.
+Livepatching can be described by five basic operations:
+loading, enabling, replacing, disabling, removing.
+
+Where the replacing and the disabling operations are mutually
+exclusive. They have the same result for the given patch but
+not for the system.
 
 
 5.1. Loading
@@ -347,7 +352,21 @@ to '0'.
    the "Consistency model" section.
 
 
-5.3. Disabling
+5.3. Replacing
+--------------
+
+All enabled patches might get replaced by a cumulative patch that
+has the .replace flag set.
+
+Once the new patch is enabled and the 'transition' finishes then
+all the functions (struct klp_func) associated with the replaced
+patches are removed from the corresponding struct klp_ops. Also
+the ftrace handler is unregistered and the struct klp_ops is
+freed when the related function is not modified by the new patch
+and func_stack list becomes empty.
+
+
+5.4. Disabling
 --------------
 
 Enabled patches might get disabled by writing '0' to
@@ -372,7 +391,7 @@ Note that patches must be disabled in exactly the reverse order in which
 they were enabled. It makes the problem and the implementation much easier.
 
 
-5.4. Removing
+5.5. Removing
 -------------
 
 Module removal is only safe when there are no users of functions provided
diff --git a/include/linux/livepatch.h b/include/linux/livepatch.h
index e117e20ff771..53551f470722 100644
--- a/include/linux/livepatch.h
+++ b/include/linux/livepatch.h
@@ -48,6 +48,7 @@
  * @old_size:	size of the old function
  * @new_size:	size of the new function
  * @kobj_added:	@kobj has been added and needs freeing
+ * @nop:	temporary patch to use the original code again; dyn. allocated
  * @patched:	the func has been added to the klp_ops list
  * @transition:	the func is currently being applied or reverted
  *
@@ -86,6 +87,7 @@ struct klp_func {
 	struct list_head stack_node;
 	unsigned long old_size, new_size;
 	bool kobj_added;
+	bool nop;
 	bool patched;
 	bool transition;
 };
@@ -125,6 +127,7 @@ struct klp_callbacks {
 * @mod:	kernel module associated with the patched object
 *		(NULL for vmlinux)
 * @kobj_added: @kobj has been added and needs freeing
+ * @dynamic:	temporary object for nop functions; dynamically allocated
 * @patched:	the object's funcs have been added to the klp_ops list
 */
 struct klp_object {
@@ -139,6 +142,7 @@ struct klp_object {
 	struct list_head node;
 	struct module *mod;
 	bool kobj_added;
+	bool dynamic;
 	bool patched;
 };
 
@@ -146,6 +150,7 @@ struct klp_object {
 * struct klp_patch - patch structure for live patching
 * @mod:	reference to the live patch module
 * @objs:	object entries for kernel objects to be patched
+ * @replace:	replace all actively used patches
 * @list:	list node for global list of actively used patches
 * @kobj:	kobject for sysfs resources
 * @obj_list:	dynamic list of the object entries
@@ -159,6 +164,7 @@ struct klp_patch {
 	/* external */
 	struct module *mod;
 	struct klp_object *objs;
+	bool replace;
 
 	/* internal */
 	struct list_head list;
@@ -174,6 +180,9 @@ struct klp_patch {
 #define klp_for_each_object_static(patch, obj) \
 	for (obj = patch->objs; obj->funcs || obj->name; obj++)
 
+#define klp_for_each_object_safe(patch, obj, tmp_obj) \
+	list_for_each_entry_safe(obj, tmp_obj, &patch->obj_list, node)
+
 #define klp_for_each_object(patch, obj) \
 	list_for_each_entry(obj, &patch->obj_list, node)
 
@@ -182,6 +191,9 @@
 		     func->old_name || func->new_func || func->old_sympos; \
 	     func++)
 
+#define klp_for_each_func_safe(obj, func, tmp_func) \
+	list_for_each_entry_safe(func, tmp_func, &obj->func_list, node)
+
 #define klp_for_each_func(obj, func) \
 	list_for_each_entry(func, &obj->func_list, node)
 
diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
index 37d0d3645fa6..ecb7660f1d8b 100644
--- a/kernel/livepatch/core.c
+++ b/kernel/livepatch/core.c
@@ -92,6 +92,40 @@ static bool klp_initialized(void)
 	return !!klp_root_kobj;
 }
 
+static struct klp_func *klp_find_func(struct klp_object *obj,
+				      struct klp_func *old_func)
+{
+	struct klp_func *func;
+
+	klp_for_each_func(obj, func) {
+		if ((strcmp(old_func->old_name, func->old_name) == 0) &&
+		    (old_func->old_sympos == func->old_sympos)) {
+			return func;
+		}
+	}
+
+	return NULL;
+}
+
+static struct klp_object *klp_find_object(struct klp_patch *patch,
+					  struct klp_object *old_obj)
+{
+	struct klp_object *obj;
+
+	klp_for_each_object(patch, obj) {
+		if (klp_is_module(old_obj)) {
+			if (klp_is_module(obj) &&
+			    strcmp(old_obj->name, obj->name) == 0) {
+				return obj;
+			}
+		} else if (!klp_is_module(obj)) {
+			return obj;
+		}
+	}
+
+	return NULL;
+}
+
 struct klp_find_arg {
 	const char *objname;
 	const char *name;
@@ -418,6 +452,121 @@ static struct attribute *klp_patch_attrs[] = {
 	NULL
 };
 
+static void klp_free_object_dynamic(struct klp_object *obj)
+{
+	kfree(obj->name);
+	kfree(obj);
+}
+
+static struct klp_object *klp_alloc_object_dynamic(const char *name)
+{
+	struct klp_object *obj;
+
+	obj = kzalloc(sizeof(*obj), GFP_KERNEL);
+	if (!obj)
+		return NULL;
+
+	if (name) {
+		obj->name = kstrdup(name, GFP_KERNEL);
+		if (!obj->name) {
+			kfree(obj);
+			return NULL;
+		}
+	}
+
+	INIT_LIST_HEAD(&obj->func_list);
+	obj->dynamic = true;
+
+	return obj;
+}
+
+static void klp_free_func_nop(struct klp_func *func)
+{
+	kfree(func->old_name);
+	kfree(func);
+}
+
+static struct klp_func *klp_alloc_func_nop(struct klp_func *old_func,
+					   struct klp_object *obj)
+{
+	struct klp_func *func;
+
+	func = kzalloc(sizeof(*func), GFP_KERNEL);
+	if (!func)
+		return NULL;
+
+	if (old_func->old_name) {
+		func->old_name = kstrdup(old_func->old_name, GFP_KERNEL);
+		if (!func->old_name) {
+			kfree(func);
+			return NULL;
+		}
+	}
+
+	/*
+	 * func->new_func is same as func->old_func. These addresses are
+	 * set when the object is loaded, see klp_init_object_loaded().
+	 */
+	func->old_sympos = old_func->old_sympos;
+	func->nop = true;
+
+	return func;
+}
+
+static int klp_add_object_nops(struct klp_patch *patch,
+			       struct klp_object *old_obj)
+{
+	struct klp_object *obj;
+	struct klp_func *func, *old_func;
+
+	obj = klp_find_object(patch, old_obj);
+
+	if (!obj) {
+		obj = klp_alloc_object_dynamic(old_obj->name);
+		if (!obj)
+			return -ENOMEM;
+
+		list_add_tail(&obj->node, &patch->obj_list);
+	}
+
+	klp_for_each_func(old_obj, old_func) {
+		func = klp_find_func(obj, old_func);
+		if (func)
+			continue;
+
+		func = klp_alloc_func_nop(old_func, obj);
+		if (!func)
+			return -ENOMEM;
+
+		list_add_tail(&func->node, &obj->func_list);
+	}
+
+	return 0;
+}
+
+/*
+ * Add 'nop' functions which simply return to the caller to run
+ * the original function. The 'nop' functions are added to a
+ * patch to facilitate a 'replace' mode.
+ */
+static int klp_add_nops(struct klp_patch *patch)
+{
+	struct klp_patch *old_patch;
+	struct klp_object *old_obj;
+
+	list_for_each_entry(old_patch, &klp_patches, list) {
+		klp_for_each_object(old_patch, old_obj) {
+			int err;
+
+			err = klp_add_object_nops(patch, old_obj);
+			if (err)
+				return err;
+		}
+	}
+
+	return 0;
+}
+
 static void klp_kobj_release_patch(struct kobject *kobj)
 {
 	struct klp_patch *patch;
@@ -434,6 +583,12 @@ static struct kobj_type klp_ktype_patch = {
 
 static void klp_kobj_release_object(struct kobject *kobj)
 {
+	struct klp_object *obj;
+
+	obj = container_of(kobj, struct klp_object, kobj);
+
+	if (obj->dynamic)
+		klp_free_object_dynamic(obj);
 }
 
 static struct kobj_type klp_ktype_object = {
@@ -443,6 +598,12 @@ static struct kobj_type klp_ktype_object = {
 
 static void klp_kobj_release_func(struct kobject *kobj)
 {
+	struct klp_func *func;
+
+	func = container_of(kobj, struct klp_func, kobj);
+
+	if (func->nop)
+		klp_free_func_nop(func);
 }
 
 static struct kobj_type klp_ktype_func = {
@@ -452,12 +613,15 @@ static struct kobj_type klp_ktype_func = {
 
 static void klp_free_funcs(struct klp_object *obj)
 {
-	struct klp_func *func;
+	struct klp_func *func, *tmp_func;
 
-	klp_for_each_func(obj, func) {
+	klp_for_each_func_safe(obj, func, tmp_func) {
 		/* Might be called from klp_init_patch() error path. */
-		if (func->kobj_added)
+		if (func->kobj_added) {
 			kobject_put(&func->kobj);
+		} else if (func->nop) {
+			klp_free_func_nop(func);
+		}
 	}
 }
 
@@ -468,20 +632,27 @@ static void klp_free_object_loaded(struct klp_object *obj)
 
 	obj->mod = NULL;
 
-	klp_for_each_func(obj, func)
+	klp_for_each_func(obj, func) {
 		func->old_func = NULL;
+
+		if (func->nop)
+			func->new_func = NULL;
+	}
 }
 
 static void klp_free_objects(struct klp_patch *patch)
 {
-	struct klp_object *obj;
+	struct klp_object *obj, *tmp_obj;
 
-	klp_for_each_object(patch, obj) {
+	klp_for_each_object_safe(patch, obj, tmp_obj) {
 		klp_free_funcs(obj);
 
 		/* Might be called from klp_init_patch() error path. */
-		if (obj->kobj_added)
+		if (obj->kobj_added) {
 			kobject_put(&obj->kobj);
+		} else if (obj->dynamic) {
+			klp_free_object_dynamic(obj);
+		}
 	}
 }
 
@@ -543,7 +714,14 @@ static int klp_init_func(struct klp_object *obj, struct klp_func *func)
 {
 	int ret;
 
-	if (!func->old_name || !func->new_func)
+	if (!func->old_name)
+		return -EINVAL;
+
+	/*
+	 * NOPs get the address later. The patched module must be loaded,
+	 * see klp_init_object_loaded().
+	 */
+	if (!func->new_func && !func->nop)
 		return -EINVAL;
 
 	if (strlen(func->old_name) >= KSYM_NAME_LEN)
@@ -605,6 +783,9 @@ static int klp_init_object_loaded(struct klp_patch *patch,
 		return -ENOENT;
 	}
 
+	if (func->nop)
+		func->new_func = func->old_func;
+
 	ret = kallsyms_lookup_size_offset((unsigned long)func->new_func,
 					  &func->new_size, NULL);
 	if (!ret) {
@@ -697,6 +878,12 @@ static int klp_init_patch(struct klp_patch *patch)
 		return ret;
 	patch->kobj_added = true;
 
+	if (patch->replace) {
+		ret = klp_add_nops(patch);
+		if (ret)
+			return ret;
+	}
+
 	klp_for_each_object(patch, obj) {
 		ret = klp_init_object(patch, obj);
 		if (ret)
@@ -869,6 +1056,35 @@ err:
 EXPORT_SYMBOL_GPL(klp_enable_patch);
 
 /*
+ * This function removes replaced patches.
+ *
+ * We could be pretty aggressive here. It is called in the situation where
+ * these structures are no longer accessible. All functions are redirected
+ * by the klp_transition_patch. They use either a new code or they are in
+ * the original code because of the special nop function patches.
+ *
+ * The only exception is when the transition was forced. In this case,
+ * klp_ftrace_handler() might still see the replaced patch on the stack.
+ * Fortunately, it is carefully designed to work with removed functions
+ * thanks to RCU. We only have to keep the patches on the system. Also
+ * this is handled transparently by patch->module_put.
+ */
+void klp_discard_replaced_patches(struct klp_patch *new_patch)
+{
+	struct klp_patch *old_patch, *tmp_patch;
+
+	list_for_each_entry_safe(old_patch, tmp_patch, &klp_patches, list) {
+		if (old_patch == new_patch)
+			return;
+
+		old_patch->enabled = false;
+		klp_unpatch_objects(old_patch);
+		klp_free_patch_start(old_patch);
+		schedule_work(&old_patch->free_work);
+	}
+}
+
+/*
 * Remove parts of patches that touch a given kernel module. The list of
 * patches processed might be limited. When limit is NULL, all patches
 * will be handled.
diff --git a/kernel/livepatch/core.h b/kernel/livepatch/core.h
index d4eefc520c08..f6a853adcc00 100644
--- a/kernel/livepatch/core.h
+++ b/kernel/livepatch/core.h
@@ -8,6 +8,7 @@ extern struct mutex klp_mutex;
 extern struct list_head klp_patches;
 
 void klp_free_patch_start(struct klp_patch *patch);
+void klp_discard_replaced_patches(struct klp_patch *new_patch);
 
 static inline bool klp_is_object_loaded(struct klp_object *obj)
 {
diff --git a/kernel/livepatch/patch.c b/kernel/livepatch/patch.c
index 825022d70912..0ff466ab4b5a 100644
--- a/kernel/livepatch/patch.c
+++ b/kernel/livepatch/patch.c
@@ -118,7 +118,15 @@ static void notrace klp_ftrace_handler(unsigned long ip,
 		}
 	}
 
+	/*
+	 * NOPs are used to replace existing patches with original code.
+	 * Do nothing! Setting pc would cause an infinite loop.
+	 */
+	if (func->nop)
+		goto unlock;
+
 	klp_arch_set_pc(regs, (unsigned long)func->new_func);
+
 unlock:
 	preempt_enable_notrace();
 }
diff --git a/kernel/livepatch/transition.c b/kernel/livepatch/transition.c
index c9917a24b3a4..f4c5908a9731 100644
--- a/kernel/livepatch/transition.c
+++ b/kernel/livepatch/transition.c
@@ -85,6 +85,9 @@ static void klp_complete_transition(void)
 		 klp_transition_patch->mod->name,
 		 klp_target_state == KLP_PATCHED ? "patching" : "unpatching");
 
+	if (klp_transition_patch->replace && klp_target_state == KLP_PATCHED)
+		klp_discard_replaced_patches(klp_transition_patch);
+
 	if (klp_target_state == KLP_UNPATCHED) {
 		/*
 		 * All tasks have transitioned to KLP_UNPATCHED so we can now
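The fast-track removal done by klp_discard_replaced_patches() — walk the ordered patch list, disable and discard every patch stacked before the new cumulative one, and stop at the new patch itself — can be modeled in userspace as follows. The types and the direct free() are illustrative assumptions; the kernel version defers the actual freeing to a workqueue via schedule_work().

```c
/* Userspace model of discarding replaced patches. Illustrative types;
 * the kernel walks a list_head and frees asynchronously. */
#include <assert.h>
#include <stdlib.h>

struct patch {
	int enabled;
	struct patch *next;   /* next (newer) patch on the stack */
};

/* Discard every patch ordered before new_patch; return how many were
 * dropped. new_patch itself is left in place and becomes the list head. */
int discard_replaced(struct patch **list, struct patch *new_patch)
{
	int discarded = 0;
	struct patch *p = *list, *tmp;

	while (p) {
		if (p == new_patch)
			break;        /* everything before it was replaced */
		tmp = p->next;        /* safe traversal: save successor */
		p->enabled = 0;       /* disable, then free: the fast track */
		free(p);
		discarded++;
		p = tmp;
	}
	*list = p;                    /* list now starts at the new patch */
	return discarded;
}
```

The early break mirrors the kernel's `if (old_patch == new_patch) return;` — the consistency model guarantees nothing newer than the cumulative patch is on the list.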