
Bug in rdmem_attr_to_mmap_attr in spm_xlat.c
Closed, Wontfix · Public


rdmem_attr_to_mmap_attr() maps RD_MEM_* constants to mmap attributes. For RD_MEM_NORMAL_CLIENT_SHARED_MEM, the resulting attributes are MT_MEMORY | MT_RW | MT_SECURE. The MT_SECURE flag should really be MT_NS, since this is the shared non-secure buffer; otherwise, accesses to these memory locations could cause cache coherency issues.
I understand there will likely be significant rework based on specs newer than Alpha 1, but I would at least like confirmation that this is a bug, to make sure I'm not missing something.

Event Timeline

raghuncstate created this object with visibility "All Users".
soby-mathew changed the visibility from "All Users" to "Public (No Login Required)". May 8 2019, 9:45 AM

Hello. Thanks for your report.

You're correct that the RD_MEM_NORMAL_CLIENT_SHARED_MEM attribute should map to MT_NS instead of MT_SECURE. The reason MT_SECURE is used at the moment is that the backing memory is allocated through an object pool stored in a secure region of DRAM. You can follow the call from map_rdmem(), through spm_alloc_heap(), to pool_alloc_n(), where PLAT_SPM_HEAP_BASE is used as the base address for the object pool.

Right now, changing the attribute to MT_NS would cause a secure partition attempting to access that memory to crash, so there is no quick fix for this. The long-term solution is to allocate any non-secure buffers in non-secure memory. That would require a full implementation of the SPCI commands that exchange information about the location of buffers owned by normal-world and secure-world entities; we will look to implement these in a later version of the prototype SPM implementation.

Thanks for taking a look and providing confirmation! :)

pbeesley-arm closed this task as Wontfix. May 13 2019, 9:41 AM
pbeesley-arm triaged this task as Low priority.


I'll close this as Wontfix because there's no specific work we'll do to address this issue in isolation. However, as mentioned, we will still come back and resolve it as part of some wider changes.