
Commit fc7b201

ryanhrob authored and gregkh committed
sparc/mm: disable preemption in lazy mmu mode
commit a1d416b upstream.

Since commit 38e0edb ("mm/apply_to_range: call pte function with lazy
updates") it's been possible for arch_[enter|leave]_lazy_mmu_mode() to
be called without holding a page table lock (for the kernel mappings
case), and therefore it is possible that preemption may occur while in
the lazy mmu mode.

The Sparc lazy mmu implementation is not robust to preemption since it
stores the lazy mode state in a per-cpu structure and does not attempt
to manage that state on task switch.

Powerpc had the same issue and fixed it by explicitly disabling
preemption in arch_enter_lazy_mmu_mode() and re-enabling in
arch_leave_lazy_mmu_mode(). See commit b9ef323 ("powerpc/64s: Disable
preemption in hash lazy mmu mode").

Given Sparc's lazy mmu mode is based on powerpc's, let's fix it in the
same way here.

Link: https://lkml.kernel.org/r/20250303141542.3371656-4-ryan.roberts@arm.com
Fixes: 38e0edb ("mm/apply_to_range: call pte function with lazy updates")
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
Acked-by: David Hildenbrand <david@redhat.com>
Acked-by: Andreas Larsson <andreas@gaisler.com>
Acked-by: Juergen Gross <jgross@suse.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
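The failure mode the message describes is easiest to see as a migration
race on the per-cpu state. Below is a minimal userspace model of it
(illustrative only: struct tlb_batch and the active flag mirror
arch/sparc/mm/tlb.c, but the two-element array standing in for per-cpu
data, and the interleaving itself, are assumptions for this sketch):

/* toy model of per-cpu lazy mmu state going stale across migration */
#include <stdio.h>

struct tlb_batch { int active; };

/* stand-in for DEFINE_PER_CPU(struct tlb_batch, tlb_batch): one slot per "CPU" */
static struct tlb_batch tlb_batch[2];

int main(void)
{
	int cpu = 0;

	/* arch_enter_lazy_mmu_mode() on CPU0: marks CPU0's batch active */
	tlb_batch[cpu].active = 1;

	/* without preempt_disable(), the task may now be preempted and
	 * migrated to another CPU while still "in" lazy mmu mode */
	cpu = 1;

	/* arch_leave_lazy_mmu_mode() then operates on CPU1's batch,
	 * leaving CPU0's batch stuck with active == 1 */
	tlb_batch[cpu].active = 0;

	printf("CPU0 active=%d (stale), CPU1 active=%d\n",
	       tlb_batch[0].active, tlb_batch[1].active);
	return 0;
}

Disabling preemption across the enter/leave window, as this patch does,
keeps the task on one CPU so this_cpu_ptr() always resolves to the same
tlb_batch for the whole lazy section.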
1 parent 60fdb9e


arch/sparc/mm/tlb.c

Lines changed: 4 additions & 1 deletion
@@ -52,8 +52,10 @@ void flush_tlb_pending(void)
 
 void arch_enter_lazy_mmu_mode(void)
 {
-	struct tlb_batch *tb = this_cpu_ptr(&tlb_batch);
+	struct tlb_batch *tb;
 
+	preempt_disable();
+	tb = this_cpu_ptr(&tlb_batch);
 	tb->active = 1;
 }
 
@@ -64,6 +66,7 @@ void arch_leave_lazy_mmu_mode(void)
 	if (tb->tlb_nr)
 		flush_tlb_pending();
 	tb->active = 0;
+	preempt_enable();
 }
 
 static void tlb_batch_add_one(struct mm_struct *mm, unsigned long vaddr,
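For readability, here is how the two functions look after the patch,
reconstructed from the hunks above. Note the opening of
arch_leave_lazy_mmu_mode() falls outside the diff context, so its
declaration line is an assumption based on the unchanged upstream file:

void arch_enter_lazy_mmu_mode(void)
{
	struct tlb_batch *tb;

	preempt_disable();
	tb = this_cpu_ptr(&tlb_batch);
	tb->active = 1;
}

void arch_leave_lazy_mmu_mode(void)
{
	/* assumed: this line is outside the hunk's context */
	struct tlb_batch *tb = this_cpu_ptr(&tlb_batch);

	if (tb->tlb_nr)
		flush_tlb_pending();
	tb->active = 0;
	preempt_enable();
}

The ordering matters: preemption is disabled before the per-cpu pointer
is taken in enter, and re-enabled only after the batch is flushed and
deactivated in leave, so the task cannot migrate while the batch is live.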
