
Commit f990114

davidhildenbrand authored and torvalds committed
mm,memory_hotplug: factor out adjusting present pages into adjust_present_page_count()
Let's have a single place (inspired by adjust_managed_page_count()) where we adjust present pages. In contrast to adjust_managed_page_count(), only memory onlining or offlining is allowed to modify the number of present pages.

Link: https://lkml.kernel.org/r/20210421102701.25051-4-osalvador@suse.de
Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Oscar Salvador <osalvador@suse.de>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
1 parent dd8e2f2 commit f990114
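
For context, here is a stand-alone model of the pattern this commit introduces; it is a sketch, not kernel code. The struct and function names mirror the kernel ones but are pared down, and a pthread mutex stands in for pgdat_resize_lock()/pgdat_resize_unlock(). It illustrates the point the diff below makes: one helper takes a signed delta, the onlining path passes +nr_pages, the offlining path passes -nr_pages, and the zone and node counters can no longer be updated in one caller but not the other.

/*
 * Stand-alone model of the refactoring in the diff below -- a sketch, not
 * kernel code.  A single helper applies a signed delta to both counters,
 * so the online and offline paths cannot drift apart.  The pthread mutex
 * stands in for pgdat_resize_lock()/pgdat_resize_unlock().
 */
#include <pthread.h>
#include <stdio.h>

struct pglist_data {
	long node_present_pages;
	pthread_mutex_t resize_lock;	/* models pgdat_resize_lock() */
};

struct zone {
	long present_pages;
	struct pglist_data *zone_pgdat;
};

/* The single adjustment point; only onlining/offlining should call it. */
static void adjust_present_page_count(struct zone *zone, long nr_pages)
{
	/* Zone counter outside the lock, node counter under it, as in the diff. */
	zone->present_pages += nr_pages;
	pthread_mutex_lock(&zone->zone_pgdat->resize_lock);
	zone->zone_pgdat->node_present_pages += nr_pages;
	pthread_mutex_unlock(&zone->zone_pgdat->resize_lock);
}

int main(void)
{
	struct pglist_data pgdat = { 0, PTHREAD_MUTEX_INITIALIZER };
	struct zone zone = { 0, &pgdat };

	adjust_present_page_count(&zone, 512);	/* what online_pages() now does */
	adjust_present_page_count(&zone, -512);	/* what offline_pages() now does */

	printf("zone=%ld node=%ld\n", zone.present_pages,
	       pgdat.node_present_pages);
	return 0;
}

Built with cc -pthread, it prints zone=0 node=0 after the paired calls, since onlining and offlining apply equal and opposite deltas through the same helper.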

File tree: 1 file changed (+12, -10 lines)

mm/memory_hotplug.c

Lines changed: 12 additions & 10 deletions
@@ -829,6 +829,16 @@ struct zone * zone_for_pfn_range(int online_type, int nid, unsigned start_pfn,
 	return default_zone_for_pfn(nid, start_pfn, nr_pages);
 }
 
+static void adjust_present_page_count(struct zone *zone, long nr_pages)
+{
+	unsigned long flags;
+
+	zone->present_pages += nr_pages;
+	pgdat_resize_lock(zone->zone_pgdat, &flags);
+	zone->zone_pgdat->node_present_pages += nr_pages;
+	pgdat_resize_unlock(zone->zone_pgdat, &flags);
+}
+
 int __ref online_pages(unsigned long pfn, unsigned long nr_pages,
 		       int online_type, int nid)
 {
@@ -884,11 +894,7 @@ int __ref online_pages(unsigned long pfn, unsigned long nr_pages,
 	}
 
 	online_pages_range(pfn, nr_pages);
-	zone->present_pages += nr_pages;
-
-	pgdat_resize_lock(zone->zone_pgdat, &flags);
-	zone->zone_pgdat->node_present_pages += nr_pages;
-	pgdat_resize_unlock(zone->zone_pgdat, &flags);
+	adjust_present_page_count(zone, nr_pages);
 
 	node_states_set_node(nid, &arg);
 	if (need_zonelists_rebuild)
@@ -1706,11 +1712,7 @@ int __ref offline_pages(unsigned long start_pfn, unsigned long nr_pages)
 
 	/* removal success */
 	adjust_managed_page_count(pfn_to_page(start_pfn), -nr_pages);
-	zone->present_pages -= nr_pages;
-
-	pgdat_resize_lock(zone->zone_pgdat, &flags);
-	zone->zone_pgdat->node_present_pages -= nr_pages;
-	pgdat_resize_unlock(zone->zone_pgdat, &flags);
+	adjust_present_page_count(zone, -nr_pages);
 
 	init_per_zone_wmark_min();
 
