mm/rmap: cleanup partially-mapped handling in __folio_remove_rmap()

Let's simplify and reduce code indentation.  In the RMAP_LEVEL_PTE case,
we already check for nr when computing partially_mapped.

For RMAP_LEVEL_PMD, it's a bit more confusing.  We likely don't need the
"nr" check, but "nr < nr_pmdmapped" could also hold if we stumbled into
the "/* Raced ahead of another remove and an add? */" case.  So let's
simply move the nr check in there.

Note that partially_mapped is always false for small folios.

No functional change intended.

Link: https://lkml.kernel.org/r/20240710214350.147864-1-david@redhat.com
Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Yosry Ahmed <yosryahmed@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Author: David Hildenbrand <david@redhat.com>, 2024-07-10 23:43:50 +02:00
Committer: Andrew Morton
commit 6654d28995
parent 94ccd21e9a

@@ -1568,22 +1568,19 @@ static __always_inline void __folio_remove_rmap(struct folio *folio,
 			}
 		}
-		partially_mapped = nr < nr_pmdmapped;
+		partially_mapped = nr && nr < nr_pmdmapped;
 		break;
 	}
 
-	if (nr) {
-		/*
-		 * Queue anon large folio for deferred split if at least one
-		 * page of the folio is unmapped and at least one page
-		 * is still mapped.
-		 *
-		 * Check partially_mapped first to ensure it is a large folio.
-		 */
-		if (folio_test_anon(folio) && partially_mapped &&
-		    list_empty(&folio->_deferred_list))
-			deferred_split_folio(folio);
-	}
+	/*
+	 * Queue anon large folio for deferred split if at least one page of
+	 * the folio is unmapped and at least one page is still mapped.
+	 *
+	 * Check partially_mapped first to ensure it is a large folio.
+	 */
+	if (partially_mapped && folio_test_anon(folio) &&
+	    list_empty(&folio->_deferred_list))
+		deferred_split_folio(folio);
 	__folio_mod_stat(folio, -nr, -nr_pmdmapped);
 	/*