mm/truncate: fix truncation for pages of arbitrary size

Remove the assumption that a compound page is HPAGE_PMD_SIZE, and the
assumption that any page is PAGE_SIZE.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: SeongJae Park <sjpark@amazon.de>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Huang Ying <ying.huang@intel.com>
Link: https://lkml.kernel.org/r/20200908195539.25896-10-willy@infradead.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
commit fc3a5ac528 (parent 5eaf35ab12)
Author: Matthew Wilcox (Oracle), 2020-10-15 20:05:50 -07:00
Committer: Linus Torvalds

@@ -168,7 +168,7 @@ void do_invalidatepage(struct page *page, unsigned int offset,
  * becomes orphaned. It will be left on the LRU and may even be mapped into
  * user pagetables if we're racing with filemap_fault().
  *
- * We need to bale out if page->mapping is no longer equal to the original
+ * We need to bail out if page->mapping is no longer equal to the original
  * mapping. This happens a) when the VM reclaimed the page while we waited on
  * its lock, b) when a concurrent invalidate_mapping_pages got there first and
  * c) when tmpfs swizzles a page between a tmpfs inode and swapper_space.
@@ -177,12 +177,12 @@ static void
 truncate_cleanup_page(struct address_space *mapping, struct page *page)
 {
 	if (page_mapped(page)) {
-		pgoff_t nr = PageTransHuge(page) ? HPAGE_PMD_NR : 1;
+		unsigned int nr = thp_nr_pages(page);
 		unmap_mapping_pages(mapping, page->index, nr, false);
 	}

 	if (page_has_private(page))
-		do_invalidatepage(page, 0, PAGE_SIZE);
+		do_invalidatepage(page, 0, thp_size(page));

 	/*
 	 * Some filesystems seem to re-dirty the page even after