
Commit 3a151ff

rostedt authored and ksacilotto committed

fgraph: Initialize tracing_graph_pause at task creation
BugLink: https://bugs.launchpad.net/bugs/1916066

commit 7e0a922 upstream.

On some archs, the idle task can call into cpu_suspend(). The
cpu_suspend() will disable or pause function graph tracing, as there's
some paths in bringing down the CPU that can have issues with its
return address being modified. The task_struct structure has a
"tracing_graph_pause" atomic counter, that when set to something other
than zero, the function graph tracer will not modify the return address.

The problem is that the tracing_graph_pause counter is initialized when
the function graph tracer is enabled. This can corrupt the counter for
the idle task if it is suspended in these architectures.

   CPU 1				CPU 2
   -----				-----
     do_idle()
       cpu_suspend()
         pause_graph_tracing()
             task_struct->tracing_graph_pause++ (0 -> 1)

				start_graph_tracing()
				  for_each_online_cpu(cpu) {
				    ftrace_graph_init_idle_task(cpu)
				      task-struct->tracing_graph_pause = 0 (1 -> 0)

         unpause_graph_tracing()
             task_struct->tracing_graph_pause-- (0 -> -1)

The above should have gone from 1 to zero, and enabled function graph
tracing again. But instead, it is set to -1, which keeps it disabled.

There's no reason that the field tracing_graph_pause on the task_struct
can not be initialized at boot up.

Cc: stable@vger.kernel.org
Fixes: 380c4b1 ("tracing/function-graph-tracer: append the tracing_graph_flag")
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=211339
Reported-by: pierre.gondois@arm.com
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
1 parent 8718b7b commit 3a151ff

File tree

2 files changed: +2 -3 lines changed


init/init_task.c

Lines changed: 2 additions & 1 deletion

@@ -171,7 +171,8 @@ struct task_struct init_task
 	.lockdep_recursion = 0,
 #endif
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
-	.ret_stack	= NULL,
+	.ret_stack		= NULL,
+	.tracing_graph_pause	= ATOMIC_INIT(0),
 #endif
 #if defined(CONFIG_TRACING) && defined(CONFIG_PREEMPTION)
 	.trace_recursion = 0,

kernel/trace/fgraph.c

Lines changed: 0 additions & 2 deletions

@@ -367,7 +367,6 @@ static int alloc_retstack_tasklist(struct ftrace_ret_stack **ret_stack_list)
 	}

 	if (t->ret_stack == NULL) {
-		atomic_set(&t->tracing_graph_pause, 0);
 		atomic_set(&t->trace_overrun, 0);
 		t->curr_ret_stack = -1;
 		t->curr_ret_depth = -1;
@@ -462,7 +461,6 @@ static DEFINE_PER_CPU(struct ftrace_ret_stack *, idle_ret_stack);
 static void
 graph_init_task(struct task_struct *t, struct ftrace_ret_stack *ret_stack)
 {
-	atomic_set(&t->tracing_graph_pause, 0);
 	atomic_set(&t->trace_overrun, 0);
 	t->ftrace_timestamp = 0;
 	/* make curr_ret_stack visible before we add the ret_stack */
