refactor(notifications): improve notification state handling
Once a notification is set, the receiver scheduler should get an
indication that the notification is pending.
For this, the SPMC triggers the SRI, which should be handled by the
FF-A driver in the NWd; the driver shall call FFA_NOTIFICATION_INFO_GET
and notify the receiver scheduler that the receiver needs CPU cycles.
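A rough sketch of the intended NWd flow follows; the handler and the
ffa_notification_info_get()/wake_receiver_scheduler() helpers are
illustrative assumptions, not the actual driver API:

  /*
   * Illustrative only: hypothetical SRI handler in the NWd FF-A driver.
   * It fetches the pending-notification info and wakes the receiver
   * scheduler; the helper names are assumptions.
   */
  static void sri_handler(void)
  {
          struct ffa_value ret = ffa_notification_info_get();

          if (ret.func == FFA_SUCCESS_64) {
                  /* Give CPU cycles to the receivers listed in `ret`. */
                  wake_receiver_scheduler(&ret);
          }
  }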
The SRI is handled in a way that mitigates spurious triggering. A
spurious SRI is one after which a call to FFA_NOTIFICATION_INFO_GET
returns no information.
The mitigation is implemented through a state machine for the SRI.
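The following is a minimal sketch of that state machine; only the
HANDLED state appears in this patch, the remaining names and
transitions are assumptions for illustration:

  /* Sketch of the SRI state machine (only HANDLED is from the patch). */
  enum sri_state {
          HANDLED,   /* Last FFA_NOTIFICATION_INFO_GET drained all info. */
          DELAYED,   /* SRI trigger deferred until a later switch to NWd. */
          TRIGGERED, /* SRI sent; FFA_NOTIFICATION_INFO_GET still due. */
  };

  /*
   * Setting a notification moves the state away from HANDLED so an SRI
   * is (eventually) triggered; a fully drained info get moves it back
   * to HANDLED so no spurious SRI is generated.
   */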
If a call to FFA_NOTIFICATION_INFO_GET doesn't return all the
information on pending notifications, the FF-A Beta0 spec mandates that
this is indicated to the receiver scheduler through the return of the
FFA_NOTIFICATION_INFO_GET call (the MORE_PENDING flag).
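On the receiver scheduler side this amounts to re-issuing the call
while the flag is set; a sketch, assuming a hypothetical
ffa_notification_info_get() wrapper (only the flag name comes from the
patch):

  struct ffa_value ret;

  /* Keep draining while the SPMC reports more pending information. */
  do {
          ret = ffa_notification_info_get();
          /* ...notify the receiver scheduler for the returned IDs... */
  } while (ret.func == FFA_SUCCESS_64 &&
           (ret.arg2 & FFA_NOTIFICATIONS_INFO_GET_FLAG_MORE_PENDING) != 0);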
In a system with concurrent calls to FFA_NOTIFICATION_INFO_GET and
FFA_NOTIFICATION_SET, the current implementation could allow a case in
which a notification is pending but its state hasn't been indicated to
the receiver scheduler: neither through triggering the SRI, nor through
the return of the FFA_NOTIFICATION_INFO_GET call.
This patch introduces global counters for the state of notifications
(sketched after the list):
1- A counter of the number of pending notifications - incremented at
notification set, and decremented at notification get.
2- A counter of the number of pending notifications whose information
has been retrieved by the receiver scheduler - incremented at
notification info get, and decremented at notification get if the
notification's information had been retrieved by the receiver
scheduler.
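A minimal sketch of the counters and of the two predicates used in the
diff below; the counter variables and their names are illustrative,
only the predicate names come from the patch:

  #include <stdbool.h>
  #include <stdint.h>

  /* Global counters kept by the notifications state (illustrative). */
  static uint32_t pending_count;
  static uint32_t info_get_retrieved_count;

  /* True if no notification is currently pending. */
  bool vm_is_notifications_pending_count_zero(void)
  {
          return pending_count == 0;
  }

  /*
   * True if there are pending notifications whose information hasn't
   * been retrieved by the receiver scheduler yet, i.e. an SRI or the
   * MORE_PENDING flag is still warranted.
   */
  bool vm_notifications_pending_not_retrieved_by_scheduler(void)
  {
          return pending_count > info_get_retrieved_count;
  }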
Change-Id: Icd2bdab7c825ba607c09f3cf54c5b413e7264c1e
Signed-off-by: J-Alves <joao.alves@arm.com>
diff --git a/src/api.c b/src/api.c
index 54bde1d..330dc72 100644
--- a/src/api.c
+++ b/src/api.c
@@ -2902,6 +2902,14 @@
ret = api_ffa_notification_get_success_return(sp_notifications,
vm_notifications, 0);
+ /*
+ * If there are no more pending notifications, change `sri_state` to
+ * handled.
+ */
+ if (vm_is_notifications_pending_count_zero()) {
+ plat_ffa_sri_state_set(HANDLED);
+ }
+
out:
vm_unlock(&receiver_locked);
@@ -2914,7 +2922,7 @@
*/
static struct ffa_value api_ffa_notification_info_get_success_return(
const uint16_t *ids, uint32_t ids_count, const uint32_t *lists_sizes,
- uint32_t lists_count, bool list_is_full)
+ uint32_t lists_count)
{
struct ffa_value ret = (struct ffa_value){.func = FFA_SUCCESS_64};
@@ -2932,8 +2940,9 @@
* - The total number of elements (i.e. total list size);
* - The number of VCPU IDs within each VM specific list.
*/
- ret.arg2 =
- list_is_full ? FFA_NOTIFICATIONS_INFO_GET_FLAG_MORE_PENDING : 0;
+ ret.arg2 = vm_notifications_pending_not_retrieved_by_scheduler()
+ ? FFA_NOTIFICATIONS_INFO_GET_FLAG_MORE_PENDING
+ : 0;
ret.arg2 |= (lists_count & FFA_NOTIFICATIONS_LISTS_COUNT_MASK)
<< FFA_NOTIFICATIONS_LISTS_COUNT_SHIFT;
@@ -2999,7 +3008,7 @@
if (!list_is_full) {
/* Grab notifications info from other world */
- list_is_full = plat_ffa_vm_notifications_info_get(
+ plat_ffa_vm_notifications_info_get(
ids, &ids_count, lists_sizes, &lists_count,
FFA_NOTIFICATIONS_INFO_GET_MAX_IDS);
}
@@ -3010,10 +3019,11 @@
result = ffa_error(FFA_NO_DATA);
} else {
result = api_ffa_notification_info_get_success_return(
- ids, ids_count, lists_sizes, lists_count, list_is_full);
- plat_ffa_sri_state_set(HANDLED);
+ ids, ids_count, lists_sizes, lists_count);
}
+ plat_ffa_sri_state_set(HANDLED);
+
return result;
}