barrier is needed after writing a PTE and before using the mapping #1805
Comments
See #1796 for discussion on this.
Nice catch. A DSB seems to be missing here. However, as discussed in #1796, the best place for the memory barrier is at a higher level. Here, it appears that a DSB is missing in the function that maps the registered shm; in that case, I think that is where it should be added. There is another place where registered dyn-shm is mapped: when a TA is invoked (and memref parameters are mapped). But that code is safe against PTE sync.
Jen's suggestion in the pull request was to put the fix in core_mmu_map_pages(). I'm going to let you guys who know the kernel internals sort out where the fix is needed.
Closing, as this has been addressed in #1827. |
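For illustration only, here is a minimal sketch of the "barrier at a higher level" idea discussed above: write all the PTEs for a region first, then issue a single DSB before the new mapping can be used. Apart from core_mmu_map_pages(), which is named in this thread, the types, helper names, and the dsb() definition below are hypothetical stand-ins, not the actual OP-TEE implementation.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-ins; the real definitions live in the OP-TEE core. */
typedef uintptr_t vaddr_t;
typedef uint64_t paddr_t;
#define SMALL_PAGE_SIZE	0x1000
#define dsb()	__asm__ volatile ("dsb sy" : : : "memory")

static uint64_t xlat_table[512];	/* hypothetical leaf translation table */

/* Hypothetical per-entry helper: writes one PTE, with no barrier of its own. */
static void set_pte(vaddr_t va, paddr_t pa, uint64_t attr)
{
	size_t idx = (va >> 12) & 0x1ff;	/* index of the 4 KiB page in the table */

	xlat_table[idx] = pa | attr;		/* plain store of the descriptor */
}

/*
 * Sketch of placing the barrier at the higher level: all PTEs for the
 * region are written first, then one DSB orders those writes before any
 * subsequent access through the new mapping.
 */
void core_mmu_map_pages(vaddr_t vstart, paddr_t *pages, size_t num_pages,
			uint64_t attr)
{
	size_t n;

	for (n = 0; n < num_pages; n++)
		set_pte(vstart + n * SMALL_PAGE_SIZE, pages[n], attr);

	dsb();	/* one barrier for the whole batch, not one per entry */
}
```

The trade-off discussed in the thread is essentially this: one barrier per mapped region at the call site, versus one barrier per PTE write inside the low-level primitive.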
I found this issue when testing the dynamic shared memory patches. A mapping was created for incoming message arguments (i.e., a PTE was written), followed fairly quickly by a read of the memory. It appears the read was executed before the PTE write had become visible, resulting in a data abort.
Inserting a dsb() in core_mmu_set_entry_primitive() in core/arch/arm/mm/core_mmu_lpae.c after the PTE update:
tbl[idx] = desc | pa;
...resolves the issue.
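As a rough sketch (not the upstream code), the workaround described above amounts to something like the following. The real core_mmu_set_entry_primitive() in core/arch/arm/mm/core_mmu_lpae.c takes additional parameters and derives the descriptor from the mapping attributes, and dsb() here stands for OP-TEE's data synchronization barrier macro as named in this report.

```c
/*
 * Simplified paraphrase of the workaround: add a DSB right after the
 * table write so the new entry is guaranteed to be visible before any
 * access is made through the mapping.
 */
static void core_mmu_set_entry_primitive(uint64_t *tbl, size_t idx,
					 paddr_t pa, uint64_t desc)
{
	tbl[idx] = desc | pa;	/* write the PTE */
	dsb();			/* complete the table write before the
				   mapping is used */
}
```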