
Commit 31383c6

djbw authored and torvalds committed
mm, hugetlbfs: introduce ->split() to vm_operations_struct
Patch series "device-dax: fix unaligned munmap handling".

When device-dax is operating in huge-page mode we want it to behave like hugetlbfs and fail attempts to split vmas into unaligned ranges. It would be messy to teach the munmap path about device-dax alignment constraints in the same (hstate) way that hugetlbfs communicates this constraint. Instead, these patches introduce a new ->split() vm operation.

This patch (of 2):

The device-dax interface has constraints similar to hugetlbfs in that it requires the munmap path to unmap in huge-page-aligned units. Rather than add more custom vma handling code in __split_vma(), introduce a new vm operation to perform this vma-specific check.

Link: http://lkml.kernel.org/r/151130418135.4029.6783191281930729710.stgit@dwillia2-desk3.amr.corp.intel.com
Fixes: dee4107 ("/dev/dax, core: file operations and dax-mmap")
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Cc: Jeff Moyer <jmoyer@redhat.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
1 parent 95a8798 commit 31383c6
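
The effect of the constraint being preserved here is visible from userspace: any munmap() that would split a hugetlbfs vma at a non-huge-page-aligned address fails. A minimal sketch, assuming x86-64 with 2 MB huge pages reserved and MAP_HUGETLB support:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = 4UL << 20; /* two 2 MB huge pages */
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);

        if (p == MAP_FAILED) {
            perror("mmap"); /* requires free huge pages (vm.nr_hugepages) */
            return 1;
        }

        /*
         * Unmapping from 4 KB into the mapping would split the vma at an
         * unaligned address; __split_vma() rejects this via ->split(),
         * so munmap() fails with EINVAL.
         */
        if (munmap((char *)p + 4096, 2UL << 20))
            printf("unaligned munmap: %s\n", strerror(errno));

        return 0;
    }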

File tree: 3 files changed, +14 −3 lines

include/linux/mm.h

Lines changed: 1 addition & 0 deletions
@@ -377,6 +377,7 @@ enum page_entry_size {
 struct vm_operations_struct {
 	void (*open)(struct vm_area_struct * area);
 	void (*close)(struct vm_area_struct * area);
+	int (*split)(struct vm_area_struct * area, unsigned long addr);
 	int (*mremap)(struct vm_area_struct * area);
 	int (*fault)(struct vm_fault *vmf);
 	int (*huge_fault)(struct vm_fault *vmf, enum page_entry_size pe_size);
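
The hook's contract, paraphrased from the mm/mmap.c change below: __split_vma() calls ->split() with the proposed split address before allocating the new vma, and a nonzero return aborts the split. An annotated view of the declaration (the comment is added here for illustration and is not in the commit):

    /* Return 0 to allow splitting this vma at addr, or a negative errno
     * (e.g. -EINVAL) to veto the split and fail the caller (typically
     * munmap()). */
    int (*split)(struct vm_area_struct * area, unsigned long addr);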

mm/hugetlb.c

Lines changed: 8 additions & 0 deletions
@@ -3125,6 +3125,13 @@ static void hugetlb_vm_op_close(struct vm_area_struct *vma)
 	}
 }

+static int hugetlb_vm_op_split(struct vm_area_struct *vma, unsigned long addr)
+{
+	if (addr & ~(huge_page_mask(hstate_vma(vma))))
+		return -EINVAL;
+	return 0;
+}
+
 /*
  * We cannot handle pagefaults against hugetlb pages at all. They cause
  * handle_mm_fault() to try to instantiate regular-sized pages in the
@@ -3141,6 +3148,7 @@ const struct vm_operations_struct hugetlb_vm_ops = {
 	.fault = hugetlb_vm_op_fault,
 	.open = hugetlb_vm_op_open,
 	.close = hugetlb_vm_op_close,
+	.split = hugetlb_vm_op_split,
 };

 static pte_t make_huge_pte(struct vm_area_struct *vma, struct page *page,
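
The test works because huge_page_mask() clears the in-page offset bits, so any low-order bit set in addr means the split point would fall inside a huge page. A standalone illustration (plain userspace C, not kernel code), assuming a 2 MB hstate:

    #include <stdio.h>

    int main(void)
    {
        /* what huge_page_mask() yields for a 2 MB huge page size */
        unsigned long mask = ~((2UL << 20) - 1);
        unsigned long addrs[] = { 0x200000UL, 0x201000UL };

        for (int i = 0; i < 2; i++)
            printf("0x%lx -> %s\n", addrs[i],
                   (addrs[i] & ~mask) ? "-EINVAL" : "split allowed");
        /* prints: 0x200000 -> split allowed, 0x201000 -> -EINVAL */
        return 0;
    }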

mm/mmap.c

Lines changed: 5 additions & 3 deletions
@@ -2555,9 +2555,11 @@ int __split_vma(struct mm_struct *mm, struct vm_area_struct *vma,
 	struct vm_area_struct *new;
 	int err;

-	if (is_vm_hugetlb_page(vma) && (addr &
-					~(huge_page_mask(hstate_vma(vma)))))
-		return -EINVAL;
+	if (vma->vm_ops && vma->vm_ops->split) {
+		err = vma->vm_ops->split(vma, addr);
+		if (err)
+			return err;
+	}

 	new = kmem_cache_alloc(vm_area_cachep, GFP_KERNEL);
 	if (!new)