Jun 24 2019
Reopening so it can be answered
It’s better to create a task from scratch than to edit the example task. I’ve added the TF-A project back so it shows on the project workboard with your question.
May 21 2019
Thanks guys! The dmbish() is not a huge deal. I just get a little nervous when I see barriers and don't completely understand why they are there. :)
Ah, you are right. Having taken another look at it, yes, the SP -> SPM communication is register-based and spm_response_add() is invoked by the SPM to push to a buffer within EL3 (it's not a buffer shared between different ELs). I suspect the shared-buffer primitives were written with a shared-buffer scenario in mind, and the current prototype implementation doesn't optimise for the case where the buffer lives entirely within EL3.
Thanks Paul, Soby.
spm_response_*() currently cannot be invoked by any secure partition, since the responses array is in EL3 space. Is this not the case? Or is the expectation that the responses array will be mapped into secure EL0 at some point in the future? I don't see how a secure partition can invoke spm_response_* other than through an SMC, in which case we are already in EL3 context and don't require the dmbish(), as Paul pointed out. I understand your argument for sprt_queue_*, since those are invoked by both EL3 and the secure partition.
Who is the lockless reader for spm_response_add() and spm_response_get()?
Apologies for the delay. I've had a look into the spm_buffers.c file and I understand your query is around the dmbish() that's performed shortly before releasing the lock in both spm_response_add() and spm_response_get().
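For reference, the pattern in question looks roughly like this. This is only a minimal sketch, not the actual spm_buffers.c source: the helper names (spin_lock, spin_unlock, dmbish) follow the usual TF-A conventions from lib/spinlock.h and arch_helpers.h, and the response structure and array size are purely illustrative.

```c
#include <stdbool.h>
#include <stdint.h>

#include <arch_helpers.h>
#include <lib/spinlock.h>

#define MAX_RESPONSES	16U

/* Illustrative response slot; the real layout lives in EL3 data. */
struct response {
	uint32_t token;
	uint64_t ret;
	bool in_use;
};

static struct response responses[MAX_RESPONSES];
static spinlock_t responses_lock;

void response_add(uint32_t token, uint64_t ret)
{
	spin_lock(&responses_lock);

	for (unsigned int i = 0U; i < MAX_RESPONSES; i++) {
		if (!responses[i].in_use) {
			responses[i].token = token;
			responses[i].ret = ret;
			responses[i].in_use = true;
			break;
		}
	}

	/*
	 * The barrier under discussion: make the stores above visible to
	 * all observers in the Inner Shareable domain before the unlock
	 * itself becomes visible.
	 */
	dmbish();

	spin_unlock(&responses_lock);
}
```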
May 13 2019
May 8 2019
Thanks. I'd missed the lockless reader of the queue. Who is the lockless reader for spm_response_add() and spm_response_get()?
A secure partition may be reading from the queue on CPU1 at the same time as CPU0 pushes a new request. If this happens, the stores need to be seen in the same order by all observers in the system.
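To make that concrete, here is a rough single-producer/single-consumer sketch of the situation. It is not the actual sprt_queue code and all names are illustrative; it only shows why the producer's dmbish() matters when a lockless reader on another CPU polls the same queue: the payload must become visible before the index update does.

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#include <arch_helpers.h>

#define QUEUE_ENTRIES	8U
#define ENTRY_SIZE	32U

struct queue {
	volatile uint32_t idx_write;	/* Only written by the producer. */
	volatile uint32_t idx_read;	/* Only written by the consumer. */
	uint8_t data[QUEUE_ENTRIES][ENTRY_SIZE];
};

/* Producer, e.g. EL3 running on CPU0. */
static bool queue_push(struct queue *q, const void *entry)
{
	uint32_t next = (q->idx_write + 1U) % QUEUE_ENTRIES;

	if (next == q->idx_read) {
		return false;		/* Queue full. */
	}

	memcpy((void *)q->data[q->idx_write], entry, ENTRY_SIZE);

	/*
	 * Make the payload visible before the index update: without this,
	 * CPU1 could observe the new idx_write and still read a stale entry.
	 */
	dmbish();

	q->idx_write = next;
	return true;
}

/* Consumer, e.g. the secure partition running on CPU1, no lock taken. */
static bool queue_pop(struct queue *q, void *entry)
{
	if (q->idx_read == q->idx_write) {
		return false;		/* Queue empty. */
	}

	/* Order the read of idx_write above before the payload read below. */
	dmbish();

	memcpy(entry, (const void *)q->data[q->idx_read], ENTRY_SIZE);

	/* Complete the payload read before publishing the freed slot. */
	dmbish();

	q->idx_read = (q->idx_read + 1U) % QUEUE_ENTRIES;
	return true;
}
```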