author    Jiri Kosina <jkosina@suse.cz>  2018-01-31 10:33:52 -0500
committer Jiri Kosina <jkosina@suse.cz>  2018-01-31 10:36:38 -0500
commit    d05b695c25bf0d704c74e0e1375de893531b9424
tree      793ac54576aa7f8316579aa0123a3879273193cd
parent    8869016d3a58cbe7c31c70f4f008a92122b271c7
parent    d0807da78e11d46f18399cbf8c4028c731346766
Merge branch 'for-4.16/remove-immediate' into for-linus
Pull 'immediate' feature removal from Miroslav Benes.
 Documentation/livepatch/livepatch.txt        | 89
 include/linux/livepatch.h                    |  4
 kernel/livepatch/core.c                      | 12
 kernel/livepatch/transition.c                | 49
 samples/livepatch/livepatch-callbacks-demo.c | 15
 samples/livepatch/livepatch-sample.c         | 15
 samples/livepatch/livepatch-shadow-fix1.c    | 15
 samples/livepatch/livepatch-shadow-fix2.c    | 15
 8 files changed, 33 insertions(+), 181 deletions(-)
diff --git a/Documentation/livepatch/livepatch.txt b/Documentation/livepatch/livepatch.txt
index 896ba8941702..1ae2de758c08 100644
--- a/Documentation/livepatch/livepatch.txt
+++ b/Documentation/livepatch/livepatch.txt
@@ -72,8 +72,7 @@ example, they add a NULL pointer or a boundary check, fix a race by adding
 a missing memory barrier, or add some locking around a critical section.
 Most of these changes are self contained and the function presents itself
 the same way to the rest of the system. In this case, the functions might
-be updated independently one by one. (This can be done by setting the
-'immediate' flag in the klp_patch struct.)
+be updated independently one by one.
 
 But there are more complex fixes. For example, a patch might change
 ordering of locking in multiple functions at the same time. Or a patch
@@ -125,12 +124,6 @@ safe to patch tasks:
    b) Patching CPU-bound user tasks. If the task is highly CPU-bound
       then it will get patched the next time it gets interrupted by an
       IRQ.
-   c) In the future it could be useful for applying patches for
-      architectures which don't yet have HAVE_RELIABLE_STACKTRACE. In
-      this case you would have to signal most of the tasks on the
-      system. However this isn't supported yet because there's
-      currently no way to patch kthreads without
-      HAVE_RELIABLE_STACKTRACE.
 
 3. For idle "swapper" tasks, since they don't ever exit the kernel, they
    instead have a klp_update_patch_state() call in the idle loop which
@@ -138,27 +131,16 @@ safe to patch tasks:
 
    (Note there's not yet such an approach for kthreads.)
 
-All the above approaches may be skipped by setting the 'immediate' flag
-in the 'klp_patch' struct, which will disable per-task consistency and
-patch all tasks immediately. This can be useful if the patch doesn't
-change any function or data semantics. Note that, even with this flag
-set, it's possible that some tasks may still be running with an old
-version of the function, until that function returns.
+Architectures which don't have HAVE_RELIABLE_STACKTRACE solely rely on
+the second approach. It's highly likely that some tasks may still be
+running with an old version of the function, until that function
+returns. In this case you would have to signal the tasks. This
+especially applies to kthreads. They may not be woken up and would need
+to be forced. See below for more information.
 
-There's also an 'immediate' flag in the 'klp_func' struct which allows
-you to specify that certain functions in the patch can be applied
-without per-task consistency. This might be useful if you want to patch
-a common function like schedule(), and the function change doesn't need
-consistency but the rest of the patch does.
-
-For architectures which don't have HAVE_RELIABLE_STACKTRACE, the user
-must set patch->immediate which causes all tasks to be patched
-immediately. This option should be used with care, only when the patch
-doesn't change any function or data semantics.
-
-In the future, architectures which don't have HAVE_RELIABLE_STACKTRACE
-may be allowed to use per-task consistency if we can come up with
-another way to patch kthreads.
+Unless we can come up with another way to patch kthreads, architectures
+without HAVE_RELIABLE_STACKTRACE are not considered fully supported by
+the kernel livepatching.
 
 The /sys/kernel/livepatch/<patch>/transition file shows whether a patch
 is in transition. Only a single patch (the topmost patch on the stack)
@@ -197,6 +179,11 @@ modules is permanently disabled when the force feature is used. It cannot be
 guaranteed there is no task sleeping in such module. It implies unbounded
 reference count if a patch module is disabled and enabled in a loop.
 
+Moreover, the usage of force may also affect future applications of live
+patches and cause even more harm to the system. Administrator should first
+consider to simply cancel a transition (see above). If force is used, reboot
+should be planned and no more live patches applied.
+
 3.1 Adding consistency model support to new architectures
 ---------------------------------------------------------
 
@@ -234,13 +221,6 @@ few options:
    a good backup option for those architectures which don't have
    reliable stack traces yet.
 
-In the meantime, patches for such architectures can bypass the
-consistency model by setting klp_patch.immediate to true. This option
-is perfectly fine for patches which don't change the semantics of the
-patched functions. In practice, this is usable for ~90% of security
-fixes. Use of this option also means the patch can't be unloaded after
-it has been disabled.
-
 
 4. Livepatch module
 ===================
@@ -296,9 +276,6 @@ into three levels:
      only for a particular object ( vmlinux or a kernel module ). Note that
      kallsyms allows for searching symbols according to the object name.
 
-     There's also an 'immediate' flag which, when set, patches the
-     function immediately, bypassing the consistency model safety checks.
-
   + struct klp_object defines an array of patched functions (struct
     klp_func) in the same object. Where the object is either vmlinux
     (NULL) or a module name.
@@ -317,9 +294,6 @@ into three levels:
     symbols are found. The only exception are symbols from objects
     (kernel modules) that have not been loaded yet.
 
-    Setting the 'immediate' flag applies the patch to all tasks
-    immediately, bypassing the consistency model safety checks.
-
     For more details on how the patch is applied on a per-task basis,
     see the "Consistency model" section.
 
@@ -334,14 +308,12 @@ section "Livepatch life-cycle" below for more details about these
 two operations.
 
 Module removal is only safe when there are no users of the underlying
-functions. The immediate consistency model is not able to detect this. The
-code just redirects the functions at the very beginning and it does not
-check if the functions are in use. In other words, it knows when the
-functions get called but it does not know when the functions return.
-Therefore it cannot be decided when the livepatch module can be safely
-removed. This is solved by a hybrid consistency model. When the system is
-transitioned to a new patch state (patched/unpatched) it is guaranteed that
-no task sleeps or runs in the old code.
+functions. This is the reason why the force feature permanently disables
+the removal. The forced tasks entered the functions but we cannot say
+that they returned back. Therefore it cannot be decided when the
+livepatch module can be safely removed. When the system is successfully
+transitioned to a new patch state (patched/unpatched) without being
+forced it is guaranteed that no task sleeps or runs in the old code.
 
 
 5. Livepatch life-cycle
@@ -355,19 +327,12 @@ First, the patch is applied only when all patched symbols for already
 loaded objects are found. The error handling is much easier if this
 check is done before particular functions get redirected.
 
-Second, the immediate consistency model does not guarantee that anyone is not
-sleeping in the new code after the patch is reverted. This means that the new
-code needs to stay around "forever". If the code is there, one could apply it
-again. Therefore it makes sense to separate the operations that might be done
-once and those that need to be repeated when the patch is enabled (applied)
-again.
-
-Third, it might take some time until the entire system is migrated
-when a more complex consistency model is used. The patch revert might
-block the livepatch module removal for too long. Therefore it is useful
-to revert the patch using a separate operation that might be called
-explicitly. But it does not make sense to remove all information
-until the livepatch module is really removed.
+Second, it might take some time until the entire system is migrated with
+the hybrid consistency model being used. The patch revert might block
+the livepatch module removal for too long. Therefore it is useful to
+revert the patch using a separate operation that might be called
+explicitly. But it does not make sense to remove all information until
+the livepatch module is really removed.
 
 
 5.1. Registration
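For orientation, the user-visible effect of the documentation change above is that a patch description no longer carries an 'immediate' member at either level. A rough sketch of the resulting API surface (not part of this series; the symbol and function names are hypothetical, modeled on samples/livepatch/livepatch-sample.c — kernel code, not buildable outside a kernel tree):

```c
/* Sketch: minimal patch description after the 'immediate' removal.
 * Names below (livepatch_cmdline_proc_show) are illustrative only.
 */
static struct klp_func funcs[] = {
	{
		.old_name = "cmdline_proc_show",	/* symbol to patch */
		.new_func = livepatch_cmdline_proc_show,
		/* no .immediate member anymore */
	}, { }
};

static struct klp_object objs[] = {
	{
		/* .name = NULL means the object is vmlinux */
		.funcs = funcs,
	}, { }
};

static struct klp_patch patch = {
	.mod = THIS_MODULE,
	.objs = objs,
	/* no .immediate member anymore; the consistency model always applies */
};
```

On architectures without HAVE_RELIABLE_STACKTRACE, registering such a patch now fails with -ENOSYS instead of silently falling back to immediate application (see the core.c hunk below).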
diff --git a/include/linux/livepatch.h b/include/linux/livepatch.h
index fc5c1be3f6f4..4754f01c1abb 100644
--- a/include/linux/livepatch.h
+++ b/include/linux/livepatch.h
@@ -40,7 +40,6 @@
  * @new_func:	pointer to the patched function code
  * @old_sympos: a hint indicating which symbol position the old function
  *		can be found (optional)
- * @immediate:  patch the func immediately, bypassing safety mechanisms
  * @old_addr:	the address of the function being patched
  * @kobj:	kobject for sysfs resources
  * @stack_node:	list node for klp_ops func_stack list
@@ -76,7 +75,6 @@ struct klp_func {
	 * in kallsyms for the given object is used.
	 */
	unsigned long old_sympos;
-	bool immediate;
 
	/* internal */
	unsigned long old_addr;
@@ -137,7 +135,6 @@ struct klp_object {
  * struct klp_patch - patch structure for live patching
  * @mod:	reference to the live patch module
  * @objs:	object entries for kernel objects to be patched
- * @immediate:  patch all funcs immediately, bypassing safety mechanisms
  * @list:	list node for global list of registered patches
  * @kobj:	kobject for sysfs resources
  * @enabled:	the patch is enabled (but operation may be incomplete)
@@ -147,7 +144,6 @@ struct klp_patch {
	/* external */
	struct module *mod;
	struct klp_object *objs;
-	bool immediate;
 
	/* internal */
	struct list_head list;
diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
index 8fd8e8f126da..3a4656fb7047 100644
--- a/kernel/livepatch/core.c
+++ b/kernel/livepatch/core.c
@@ -366,11 +366,6 @@ static int __klp_enable_patch(struct klp_patch *patch)
	/*
	 * A reference is taken on the patch module to prevent it from being
	 * unloaded.
-	 *
-	 * Note: For immediate (no consistency model) patches we don't allow
-	 * patch modules to unload since there is no safe/sane method to
-	 * determine if a thread is still running in the patched code contained
-	 * in the patch module once the ftrace registration is successful.
	 */
	if (!try_module_get(patch->mod))
		return -ENODEV;
@@ -894,12 +889,7 @@ int klp_register_patch(struct klp_patch *patch)
	if (!klp_initialized())
		return -ENODEV;
 
-	/*
-	 * Architectures without reliable stack traces have to set
-	 * patch->immediate because there's currently no way to patch kthreads
-	 * with the consistency model.
-	 */
-	if (!klp_have_reliable_stack() && !patch->immediate) {
+	if (!klp_have_reliable_stack()) {
		pr_err("This architecture doesn't have support for the livepatch consistency model.\n");
		return -ENOSYS;
	}
diff --git a/kernel/livepatch/transition.c b/kernel/livepatch/transition.c
index be5bfa533ee8..7c6631e693bc 100644
--- a/kernel/livepatch/transition.c
+++ b/kernel/livepatch/transition.c
@@ -82,7 +82,6 @@ static void klp_complete_transition(void)
	struct klp_func *func;
	struct task_struct *g, *task;
	unsigned int cpu;
-	bool immediate_func = false;
 
	pr_debug("'%s': completing %s transition\n",
		 klp_transition_patch->mod->name,
@@ -104,16 +103,9 @@ static void klp_complete_transition(void)
		klp_synchronize_transition();
	}
 
-	if (klp_transition_patch->immediate)
-		goto done;
-
-	klp_for_each_object(klp_transition_patch, obj) {
-		klp_for_each_func(obj, func) {
+	klp_for_each_object(klp_transition_patch, obj)
+		klp_for_each_func(obj, func)
 			func->transition = false;
-			if (func->immediate)
-				immediate_func = true;
-		}
-	}
 
	/* Prevent klp_ftrace_handler() from seeing KLP_UNDEFINED state */
	if (klp_target_state == KLP_PATCHED)
@@ -132,7 +124,6 @@ static void klp_complete_transition(void)
		task->patch_state = KLP_UNDEFINED;
	}
 
-done:
	klp_for_each_object(klp_transition_patch, obj) {
		if (!klp_is_object_loaded(obj))
			continue;
@@ -146,16 +137,11 @@ done:
		 klp_target_state == KLP_PATCHED ? "patching" : "unpatching");
 
	/*
-	 * See complementary comment in __klp_enable_patch() for why we
-	 * keep the module reference for immediate patches.
-	 *
-	 * klp_forced or immediate_func set implies unbounded increase of
-	 * module's ref count if the module is disabled/enabled in a loop.
+	 * klp_forced set implies unbounded increase of module's ref count if
+	 * the module is disabled/enabled in a loop.
	 */
-	if (!klp_forced && !klp_transition_patch->immediate &&
-	    !immediate_func && klp_target_state == KLP_UNPATCHED) {
+	if (!klp_forced && klp_target_state == KLP_UNPATCHED)
		module_put(klp_transition_patch->mod);
-	}
 
	klp_target_state = KLP_UNDEFINED;
	klp_transition_patch = NULL;
@@ -223,9 +209,6 @@ static int klp_check_stack_func(struct klp_func *func,
	struct klp_ops *ops;
	int i;
 
-	if (func->immediate)
-		return 0;
-
	for (i = 0; i < trace->nr_entries; i++) {
		address = trace->entries[i];
 
@@ -388,13 +371,6 @@ void klp_try_complete_transition(void)
	WARN_ON_ONCE(klp_target_state == KLP_UNDEFINED);
 
	/*
-	 * If the patch can be applied or reverted immediately, skip the
-	 * per-task transitions.
-	 */
-	if (klp_transition_patch->immediate)
-		goto success;
-
-	/*
	 * Try to switch the tasks to the target patch state by walking their
	 * stacks and looking for any to-be-patched or to-be-unpatched
	 * functions. If such functions are found on a stack, or if the stack
@@ -437,7 +413,6 @@ void klp_try_complete_transition(void)
		return;
	}
 
-success:
	/* we're done, now cleanup the data structures */
	klp_complete_transition();
 }
@@ -458,13 +433,6 @@ void klp_start_transition(void)
		  klp_target_state == KLP_PATCHED ? "patching" : "unpatching");
 
	/*
-	 * If the patch can be applied or reverted immediately, skip the
-	 * per-task transitions.
-	 */
-	if (klp_transition_patch->immediate)
-		return;
-
-	/*
	 * Mark all normal tasks as needing a patch state update. They'll
	 * switch either in klp_try_complete_transition() or as they exit the
	 * kernel.
@@ -514,13 +482,6 @@ void klp_init_transition(struct klp_patch *patch, int state)
		 klp_target_state == KLP_PATCHED ? "patching" : "unpatching");
 
	/*
-	 * If the patch can be applied or reverted immediately, skip the
-	 * per-task transitions.
-	 */
-	if (patch->immediate)
-		return;
-
-	/*
	 * Initialize all tasks to the initial patch state to prepare them for
	 * switching to the target state.
	 */
diff --git a/samples/livepatch/livepatch-callbacks-demo.c b/samples/livepatch/livepatch-callbacks-demo.c
index 3d115bd68442..72f9e6d1387b 100644
--- a/samples/livepatch/livepatch-callbacks-demo.c
+++ b/samples/livepatch/livepatch-callbacks-demo.c
@@ -197,21 +197,6 @@ static int livepatch_callbacks_demo_init(void)
 {
	int ret;
 
-	if (!klp_have_reliable_stack() && !patch.immediate) {
-		/*
-		 * WARNING: Be very careful when using 'patch.immediate' in
-		 * your patches. It's ok to use it for simple patches like
-		 * this, but for more complex patches which change function
-		 * semantics, locking semantics, or data structures, it may not
-		 * be safe. Use of this option will also prevent removal of
-		 * the patch.
-		 *
-		 * See Documentation/livepatch/livepatch.txt for more details.
-		 */
-		patch.immediate = true;
-		pr_notice("The consistency model isn't supported for your architecture. Bypassing safety mechanisms and applying the patch immediately.\n");
-	}
-
	ret = klp_register_patch(&patch);
	if (ret)
		return ret;
diff --git a/samples/livepatch/livepatch-sample.c b/samples/livepatch/livepatch-sample.c
index 84795223f15f..2d554dd930e2 100644
--- a/samples/livepatch/livepatch-sample.c
+++ b/samples/livepatch/livepatch-sample.c
@@ -71,21 +71,6 @@ static int livepatch_init(void)
 {
	int ret;
 
-	if (!klp_have_reliable_stack() && !patch.immediate) {
-		/*
-		 * WARNING: Be very careful when using 'patch.immediate' in
-		 * your patches. It's ok to use it for simple patches like
-		 * this, but for more complex patches which change function
-		 * semantics, locking semantics, or data structures, it may not
-		 * be safe. Use of this option will also prevent removal of
-		 * the patch.
-		 *
-		 * See Documentation/livepatch/livepatch.txt for more details.
-		 */
-		patch.immediate = true;
-		pr_notice("The consistency model isn't supported for your architecture. Bypassing safety mechanisms and applying the patch immediately.\n");
-	}
-
	ret = klp_register_patch(&patch);
	if (ret)
		return ret;
diff --git a/samples/livepatch/livepatch-shadow-fix1.c b/samples/livepatch/livepatch-shadow-fix1.c
index fbe0a1f3d99b..830c55514f9f 100644
--- a/samples/livepatch/livepatch-shadow-fix1.c
+++ b/samples/livepatch/livepatch-shadow-fix1.c
@@ -133,21 +133,6 @@ static int livepatch_shadow_fix1_init(void)
 {
	int ret;
 
-	if (!klp_have_reliable_stack() && !patch.immediate) {
-		/*
-		 * WARNING: Be very careful when using 'patch.immediate' in
-		 * your patches. It's ok to use it for simple patches like
-		 * this, but for more complex patches which change function
-		 * semantics, locking semantics, or data structures, it may not
-		 * be safe. Use of this option will also prevent removal of
-		 * the patch.
-		 *
-		 * See Documentation/livepatch/livepatch.txt for more details.
-		 */
-		patch.immediate = true;
-		pr_notice("The consistency model isn't supported for your architecture. Bypassing safety mechanisms and applying the patch immediately.\n");
-	}
-
	ret = klp_register_patch(&patch);
	if (ret)
		return ret;
diff --git a/samples/livepatch/livepatch-shadow-fix2.c b/samples/livepatch/livepatch-shadow-fix2.c
index 53c1794bdc5f..ff9948f0ec00 100644
--- a/samples/livepatch/livepatch-shadow-fix2.c
+++ b/samples/livepatch/livepatch-shadow-fix2.c
@@ -128,21 +128,6 @@ static int livepatch_shadow_fix2_init(void)
 {
	int ret;
 
-	if (!klp_have_reliable_stack() && !patch.immediate) {
-		/*
-		 * WARNING: Be very careful when using 'patch.immediate' in
-		 * your patches. It's ok to use it for simple patches like
-		 * this, but for more complex patches which change function
-		 * semantics, locking semantics, or data structures, it may not
-		 * be safe. Use of this option will also prevent removal of
-		 * the patch.
-		 *
-		 * See Documentation/livepatch/livepatch.txt for more details.
-		 */
-		patch.immediate = true;
-		pr_notice("The consistency model isn't supported for your architecture. Bypassing safety mechanisms and applying the patch immediately.\n");
-	}
-
	ret = klp_register_patch(&patch);
	if (ret)
		return ret;
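All four sample modules end up with the same simplified init path once the klp_have_reliable_stack() fallback is gone. A sketch of the common shape after this series (kernel-module code, not buildable outside a kernel tree; error handling kept minimal, and the 4.16-era klp_unregister_patch() API is assumed):

```c
/* Sketch: the init path the four samples share after this series.
 * Registration itself now enforces the consistency model; on an
 * architecture without HAVE_RELIABLE_STACKTRACE, klp_register_patch()
 * fails with -ENOSYS instead of falling back to immediate mode.
 */
static int livepatch_init(void)
{
	int ret;

	ret = klp_register_patch(&patch);
	if (ret)
		return ret;

	ret = klp_enable_patch(&patch);
	if (ret) {
		WARN_ON(klp_unregister_patch(&patch));
		return ret;
	}

	return 0;
}
```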