Merge changes from topic "mp/simd_ctxt_mgmt" into integration
* changes:
feat(fvp): allow SIMD context to be put in TZC DRAM
docs(simd): introduce CTX_INCLUDE_SVE_REGS build flag
feat(fvp): add Cactus partition manifest for EL3 SPMC
chore(simd): remove unused macros and utilities for FP
feat(el3-spmc): support simd context management upon world switch
feat(trusty): switch to simd_ctx_save/restore apis
feat(pncd): switch to simd_ctx_save/restore apis
feat(spm-mm): switch to simd_ctx_save/restore APIs
feat(simd): add rules to rationalize simd ctxt mgmt
feat(simd): introduce simd context helper APIs
feat(simd): add routines to save, restore sve state
feat(simd): add sve state to simd ctxt struct
feat(simd): add data struct for simd ctxt management
diff --git a/bl31/bl31.mk b/bl31/bl31.mk
index 2f1215c..7dc71a2 100644
--- a/bl31/bl31.mk
+++ b/bl31/bl31.mk
@@ -42,6 +42,7 @@
bl31/bl31_context_mgmt.c \
bl31/bl31_traps.c \
common/runtime_svc.c \
+ lib/cpus/errata_common.c \
lib/cpus/aarch64/dsu_helpers.S \
plat/common/aarch64/platform_mp_stack.S \
services/arm_arch_svc/arm_arch_svc_setup.c \
diff --git a/docs/components/context-management-library.rst b/docs/components/context-management-library.rst
index 56ba2ec..266b82a 100644
--- a/docs/components/context-management-library.rst
+++ b/docs/components/context-management-library.rst
@@ -98,14 +98,15 @@
4. **Dynamic discovery of Feature enablement by EL3**
-TF-A supports three states for feature enablement at EL3, to make them available
+TF-A supports four states for feature enablement at EL3, to make them available
for lower exception levels.
.. code:: c
- #define FEAT_STATE_DISABLED 0
- #define FEAT_STATE_ENABLED 1
- #define FEAT_STATE_CHECK 2
+ #define FEAT_STATE_DISABLED 0
+ #define FEAT_STATE_ENABLED 1
+ #define FEAT_STATE_CHECK 2
+ #define FEAT_STATE_CHECK_ASYMMETRIC 3
A pattern is established for feature enablement behavior.
Each feature must support these possible values with rigid semantics.
@@ -119,7 +120,26 @@
- **FEAT_STATE_CHECK** - same as ``FEAT_STATE_ALWAYS`` except that the feature's
existence will be checked at runtime. Default on dynamic platforms (example: FVP).
-.. note::
+- **FEAT_STATE_CHECK_ASYMMETRIC** - same as ``FEAT_STATE_CHECK`` except that the feature's
+ existence may be asymmetric across cores, which requires that the feature's existence is
+ also checked during the warmboot path. Note that only a limited number of features can
+ be asymmetric.
+
+ .. note::
+ Only a limited number of features can be ``FEAT_STATE_CHECK_ASYMMETRIC``; this is because
+ operating systems are designed for SMP systems.
+ There are no clear guidelines on what kind of mismatch is allowed, but the following
+ pointers can help in making a decision:
+
+ - All mandatory features must be symmetric.
+ - Any feature that impacts the generation of page tables must be symmetric.
+ - Any feature access which does not trap to EL3 should be symmetric.
+ - Features related to profiling, debug and trace could be asymmetric.
+ - Migration of vCPUs/tasks between CPUs should not cause an error.
+
+ Whenever asymmetric support is added for a feature, TF-A needs to add feature-specific
+ code in the context management library, as sketched in the example below.
+
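+ A minimal sketch of such feature-specific handling, modelled on the SPE handling that
+ this patch set adds to ``cm_handle_asymmetric_features()``:
+
+ .. code:: c
+
+    void cm_handle_asymmetric_features(void)
+    {
+    #if ENABLE_SPE_FOR_NS == FEAT_STATE_CHECK_ASYMMETRIC
+            cpu_context_t *ctx = cm_get_context(NON_SECURE);
+
+            assert(ctx != NULL);
+
+            /* Re-check the feature on this core and override the context
+             * set up by the primary core. */
+            if (is_feat_spe_supported()) {
+                    spe_enable(ctx);
+            } else {
+                    spe_disable(ctx);
+            }
+    #endif
+    }
+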
+ .. note::
``FEAT_RAS`` is an exception here, as it impacts the execution of EL3 and
it is essential to know its presence at compile time. Refer to ``ENABLE_FEAT``
macro under :ref:`Build Options` section for more details.
@@ -498,4 +518,4 @@
.. |Context Init WarmBoot| image:: ../resources/diagrams/context_init_warmboot.png
.. _Trustzone for AArch64: https://developer.arm.com/documentation/102418/0101/TrustZone-in-the-processor/Switching-between-Security-states
.. _Security States with RME: https://developer.arm.com/documentation/den0126/0100/Security-states
-.. _lib/el3_runtime/(aarch32/aarch64): https://git.trustedfirmware.org/TF-A/trusted-firmware-a.git/tree/lib/el3_runtime
\ No newline at end of file
+.. _lib/el3_runtime/(aarch32/aarch64): https://git.trustedfirmware.org/TF-A/trusted-firmware-a.git/tree/lib/el3_runtime
diff --git a/docs/components/romlib-design.rst b/docs/components/romlib-design.rst
index 62c173a..c0f3ed3 100644
--- a/docs/components/romlib-design.rst
+++ b/docs/components/romlib-design.rst
@@ -71,6 +71,15 @@
The "library at ROM" contains a necessary init function that initialises the
global variables defined by the functions inside "library at ROM".
+Wrapper functions are specified at the link stage of compilation and cannot
+interpose upon functions within the same translation unit. For example, if
+function ``fn_a`` calls ``fn_b`` within translation unit ``functions.c`` and
+the romlib jump table includes an entry for ``fn_b``, ``fn_a`` will include
+a reference to ``fn_b``'s original program text instead of the wrapper. Thus
+the jumptable author must take care to include public entry points into
+translation units to avoid paying the program text cost twice, once in the
+original executable and once in romlib.
+
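+For illustration, a minimal sketch of the situation described above (the function
+bodies are hypothetical):
+
+.. code:: c
+
+   /* functions.c - fn_a and fn_b live in the same translation unit. */
+   int fn_b(int x)
+   {
+           return x + 1;
+   }
+
+   int fn_a(int x)
+   {
+           /*
+            * This call is resolved within the translation unit, so the
+            * linker's --wrap option cannot redirect it to __wrap_fn_b even
+            * when fn_b has an entry in the romlib jump table.
+            */
+           return fn_b(x) * 2;
+   }
+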
Script
~~~~~~
@@ -86,7 +95,7 @@
3. ``romlib_generator.py genwrappers [args]`` - Generates a wrapper function for
each entry in the index file except for the ones that contain the keyword
- ``patch``. The generated wrapper file is called ``<fn_name>.s``.
+ ``patch``. The generated wrapper file is called ``wrappers.s``.
4. ``romlib_generator.py pre [args]`` - Preprocesses the index file which means
it resolves all the include commands in the file recursively. It can also
diff --git a/docs/design/cpu-specific-build-macros.rst b/docs/design/cpu-specific-build-macros.rst
index 7af2eae..0cdcc20 100644
--- a/docs/design/cpu-specific-build-macros.rst
+++ b/docs/design/cpu-specific-build-macros.rst
@@ -826,6 +826,10 @@
feature is enabled and can assist the Kernel in the process of
mitigation of the erratum.
+- ``ERRATA_X4_2726228``: This applies erratum 2726228 workaround to Cortex-X4
+ CPU. This needs to be enabled for revisions r0p0 and r0p1. It is fixed in
+ r0p2.
+
- ``ERRATA_X4_2740089``: This applies errata 2740089 workaround to Cortex-X4
CPU. This needs to be enabled for revisions r0p0 and r0p1. It is fixed
in r0p2.
@@ -899,6 +903,10 @@
Cortex-A520 CPU. This needs to be enabled for revisions r0p0 and r0p1.
It is still open.
+- ``ERRATA_A520_2938996``: This applies errata 2938996 workaround to
+ Cortex-A520 CPU. This needs to be enabled for revisions r0p0 and r0p1.
+ It is fixed in r0p2.
+
For Cortex-A715, the following errata build flags are defined :
- ``ERRATA_A715_2331818``: This applies errata 2331818 workaround to
diff --git a/drivers/nxp/console/linflex_console.S b/drivers/nxp/console/linflex_console.S
index abcbb59..d8c10ef 100644
--- a/drivers/nxp/console/linflex_console.S
+++ b/drivers/nxp/console/linflex_console.S
@@ -18,6 +18,7 @@
#define LINFLEX_LINSR (0x8)
#define LINSR_LINS_INITMODE (0x00001000)
+#define LINSR_LINS_RX_TX_MODE (0x00008000)
#define LINSR_LINS_MASK (0x0000F000)
#define LINFLEX_UARTCR (0x10)
@@ -48,9 +49,11 @@
*/
.globl console_linflex_core_init
.globl console_linflex_core_putc
+.globl console_linflex_core_flush
.globl console_linflex_register
.globl console_linflex_putc
+.globl console_linflex_flush
/**
* uint32_t get_ldiv_mult(uintptr_t baseaddr, uint32_t clock,
@@ -175,10 +178,29 @@
str x0, [x3, #CONSOLE_T_BASE]
mov x0, x3
- finish_console_register linflex, putc=1, getc=0, flush=0
+ finish_console_register linflex, putc=1, getc=0, flush=1
endfunc console_linflex_register
/**
+ * int console_linflex_core_flush(uintptr_t baseaddr);
+ *
+ * Loop while the TX fifo is not empty, depending on the selected UART mode.
+ *
+ * In: x0 - Linflex base address
+ * Clobber list : x0 - x1
+ */
+func console_linflex_core_flush
+wait_rx_tx:
+ ldr w1, [x0, LINFLEX_LINSR]
+ and w1, w1, #LINSR_LINS_MASK
+ cmp w1, #LINSR_LINS_RX_TX_MODE
+ b.eq wait_rx_tx
+
+ mov x0, #0
+ ret
+endfunc console_linflex_core_flush
+
+/**
* int console_linflex_core_putc(int c, uintptr_t baseaddr);
* Out: w0 - printed character on success, < 0 on error.
@@ -257,3 +279,21 @@
mov x0, #-EINVAL
ret
endfunc console_linflex_putc
+
+/**
+ * int console_linflex_flush(console_t *console);
+ *
+ * Function to wait for the TX FIFO to be cleared.
+ * In : x0 - pointer to console_t struct
+ * Out: x0 - return -EINVAL on error else return 0.
+ * Clobber list : x0 - x1
+ */
+func console_linflex_flush
+ cbz x0, flush_error
+ ldr x0, [x0, #CONSOLE_T_BASE]
+
+ b console_linflex_core_flush
+flush_error:
+ mov x0, #-EINVAL
+ ret
+endfunc console_linflex_flush
diff --git a/include/arch/aarch64/arch.h b/include/arch/aarch64/arch.h
index 52ed2b9..d8ad881 100644
--- a/include/arch/aarch64/arch.h
+++ b/include/arch/aarch64/arch.h
@@ -24,6 +24,9 @@
#define MIDR_PN_MASK U(0xfff)
#define MIDR_PN_SHIFT U(0x4)
+/* Extracts the CPU part number from MIDR for checking CPU match */
+#define EXTRACT_PARTNUM(x) ((x >> MIDR_PN_SHIFT) & MIDR_PN_MASK)
+
/*******************************************************************************
* MPIDR macros
******************************************************************************/
diff --git a/include/common/feat_detect.h b/include/common/feat_detect.h
index 788dfb3..b85e1ce 100644
--- a/include/common/feat_detect.h
+++ b/include/common/feat_detect.h
@@ -11,8 +11,9 @@
void detect_arch_features(void);
/* Macro Definitions */
-#define FEAT_STATE_DISABLED 0
-#define FEAT_STATE_ALWAYS 1
-#define FEAT_STATE_CHECK 2
+#define FEAT_STATE_DISABLED 0
+#define FEAT_STATE_ALWAYS 1
+#define FEAT_STATE_CHECK 2
+#define FEAT_STATE_CHECK_ASYMMETRIC 3
#endif /* FEAT_DETECT_H */
diff --git a/include/lib/cpus/aarch64/cortex_a520.h b/include/lib/cpus/aarch64/cortex_a520.h
index ed3401d..11ddea9 100644
--- a/include/lib/cpus/aarch64/cortex_a520.h
+++ b/include/lib/cpus/aarch64/cortex_a520.h
@@ -28,4 +28,15 @@
#define CORTEX_A520_CPUPWRCTLR_EL1 S3_0_C15_C2_7
#define CORTEX_A520_CPUPWRCTLR_EL1_CORE_PWRDN_BIT U(1)
+#ifndef __ASSEMBLER__
+#if ERRATA_A520_2938996
+long check_erratum_cortex_a520_2938996(long cpu_rev);
+#else
+static inline long check_erratum_cortex_a520_2938996(long cpu_rev)
+{
+ return 0;
+}
+#endif /* ERRATA_A520_2938996 */
+#endif /* __ASSEMBLER__ */
+
#endif /* CORTEX_A520_H */
diff --git a/include/lib/cpus/aarch64/cortex_x4.h b/include/lib/cpus/aarch64/cortex_x4.h
index d81c3ca..4b6af8b 100644
--- a/include/lib/cpus/aarch64/cortex_x4.h
+++ b/include/lib/cpus/aarch64/cortex_x4.h
@@ -34,4 +34,15 @@
#define CORTEX_X4_CPUACTLR5_EL1 S3_0_C15_C8_0
#define CORTEX_X4_CPUACTLR5_EL1_BIT_14 (ULL(1) << 14)
+#ifndef __ASSEMBLER__
+#if ERRATA_X4_2726228
+long check_erratum_cortex_x4_2726228(long cpu_rev);
+#else
+static inline long check_erratum_cortex_x4_2726228(long cpu_rev)
+{
+ return 0;
+}
+#endif /* ERRATA_X4_2726228 */
+#endif /* __ASSEMBLER__ */
+
#endif /* CORTEX_X4_H */
diff --git a/include/lib/cpus/errata.h b/include/lib/cpus/errata.h
index 2080898..a8eb84c 100644
--- a/include/lib/cpus/errata.h
+++ b/include/lib/cpus/errata.h
@@ -25,12 +25,21 @@
#define ERRATUM_MITIGATED ERRATUM_CHOSEN + ERRATUM_CHOSEN_SIZE
#define ERRATUM_ENTRY_SIZE ERRATUM_MITIGATED + ERRATUM_MITIGATED_SIZE
+/* Errata status */
+#define ERRATA_NOT_APPLIES 0
+#define ERRATA_APPLIES 1
+#define ERRATA_MISSING 2
+
#ifndef __ASSEMBLER__
#include <lib/cassert.h>
void print_errata_status(void);
void errata_print_msg(unsigned int status, const char *cpu, const char *id);
+#if ERRATA_A520_2938996 || ERRATA_X4_2726228
+unsigned int check_if_affected_core(void);
+#endif
+
/*
* NOTE that this structure will be different on AArch32 and AArch64. The
* uintptr_t will reflect the change and the alignment will be correct in both.
@@ -74,11 +83,6 @@
#endif /* __ASSEMBLER__ */
-/* Errata status */
-#define ERRATA_NOT_APPLIES 0
-#define ERRATA_APPLIES 1
-#define ERRATA_MISSING 2
-
/* Macro to get CPU revision code for checking errata version compatibility. */
#define CPU_REV(r, p) ((r << 4) | p)
diff --git a/include/lib/el3_runtime/context_mgmt.h b/include/lib/el3_runtime/context_mgmt.h
index 7451b85..b7b73e6 100644
--- a/include/lib/el3_runtime/context_mgmt.h
+++ b/include/lib/el3_runtime/context_mgmt.h
@@ -44,6 +44,7 @@
void cm_manage_extensions_el3(void);
void manage_extensions_nonsecure_per_world(void);
void cm_el3_arch_init_per_world(per_world_context_t *per_world_ctx);
+void cm_handle_asymmetric_features(void);
#endif
#if CTX_INCLUDE_EL2_REGS
@@ -95,6 +96,7 @@
void cm_set_next_context(void *context);
static inline void cm_manage_extensions_el3(void) {}
static inline void manage_extensions_nonsecure_per_world(void) {}
+static inline void cm_handle_asymmetric_features(void) {}
#endif /* __aarch64__ */
#endif /* CONTEXT_MGMT_H */
diff --git a/lib/cpus/aarch64/cortex_a520.S b/lib/cpus/aarch64/cortex_a520.S
index 74ecbf7..b8f1468 100644
--- a/lib/cpus/aarch64/cortex_a520.S
+++ b/lib/cpus/aarch64/cortex_a520.S
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2021-2023, Arm Limited. All rights reserved.
+ * Copyright (c) 2021-2024, Arm Limited. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
@@ -11,6 +11,9 @@
#include <cpu_macros.S>
#include <plat_macros.S>
+/* .global erratum_cortex_a520_2938996_wa */
+.global check_erratum_cortex_a520_2938996
+
/* Hardware handled coherency */
#if HW_ASSISTED_COHERENCY == 0
#error "Cortex A520 must be compiled with HW_ASSISTED_COHERENCY enabled"
@@ -32,6 +35,25 @@
workaround_reset_end cortex_a520, ERRATUM(2858100)
check_erratum_ls cortex_a520, ERRATUM(2858100), CPU_REV(0, 1)
+
+workaround_runtime_start cortex_a520, ERRATUM(2938996), ERRATA_A520_2938996, CORTEX_A520_MIDR
+workaround_runtime_end cortex_a520, ERRATUM(2938996)
+
+check_erratum_custom_start cortex_a520, ERRATUM(2938996)
+
+ /* This erratum needs to be enabled for r0p0 and r0p1.
+ * Check if revision is less than or equal to r0p1.
+ */
+
+#if ERRATA_A520_2938996
+ mov x1, #1
+ b cpu_rev_var_ls
+#else
+ mov x0, #ERRATA_MISSING
+#endif
+ ret
+check_erratum_custom_end cortex_a520, ERRATUM(2938996)
+
/* ----------------------------------------------------
* HW will do the cache maintenance while powering down
* ----------------------------------------------------
diff --git a/lib/cpus/aarch64/cortex_x4.S b/lib/cpus/aarch64/cortex_x4.S
index 9f822af..7c9a5a4 100644
--- a/lib/cpus/aarch64/cortex_x4.S
+++ b/lib/cpus/aarch64/cortex_x4.S
@@ -22,10 +22,30 @@
#error "Cortex X4 supports only AArch64. Compile with CTX_INCLUDE_AARCH32_REGS=0"
#endif
+.global check_erratum_cortex_x4_2726228
+
#if WORKAROUND_CVE_2022_23960
wa_cve_2022_23960_bhb_vector_table CORTEX_X4_BHB_LOOP_COUNT, cortex_x4
#endif /* WORKAROUND_CVE_2022_23960 */
+workaround_runtime_start cortex_x4, ERRATUM(2726228), ERRATA_X4_2726228, CORTEX_X4_MIDR
+workaround_runtime_end cortex_x4, ERRATUM(2726228)
+
+check_erratum_custom_start cortex_x4, ERRATUM(2726228)
+
+ /* This erratum needs to be enabled for r0p0 and r0p1.
+ * Check if revision is less than or equal to r0p1.
+ */
+
+#if ERRATA_X4_2726228
+ mov x1, #1
+ b cpu_rev_var_ls
+#else
+ mov x0, #ERRATA_MISSING
+#endif
+ ret
+check_erratum_custom_end cortex_x4, ERRATUM(2726228)
+
workaround_runtime_start cortex_x4, ERRATUM(2740089), ERRATA_X4_2740089
/* dsb before isb of power down sequence */
dsb sy
diff --git a/lib/cpus/cpu-ops.mk b/lib/cpus/cpu-ops.mk
index c9ff110..1a9ee72 100644
--- a/lib/cpus/cpu-ops.mk
+++ b/lib/cpus/cpu-ops.mk
@@ -823,6 +823,10 @@
# cpu and is fixed in r0p1.
CPU_FLAG_LIST += ERRATA_X4_2701112
+# Flag to apply erratum 2726228 workaround during warmboot. This erratum
+# applies to all revisions <= r0p1 of the Cortex-X4 cpu, it is fixed in r0p2.
+CPU_FLAG_LIST += ERRATA_X4_2726228
+
# Flag to apply erratum 2740089 workaround during powerdown. This erratum
# applies to all revisions <= r0p1 of the Cortex-X4 cpu, it is fixed in r0p2.
CPU_FLAG_LIST += ERRATA_X4_2740089
@@ -896,6 +900,10 @@
# applies to revision r0p0 and r0p1 of the Cortex-A520 cpu and is still open.
CPU_FLAG_LIST += ERRATA_A520_2858100
+# Flag to apply erratum 2938996 workaround during reset. This erratum
+# applies to revision r0p0 and r0p1 of the Cortex-A520 cpu and is fixed in r0p2.
+CPU_FLAG_LIST += ERRATA_A520_2938996
+
# Flag to apply erratum 2331132 workaround during reset. This erratum applies
# to revisions r0p0, r0p1 and r0p2. It is still open.
CPU_FLAG_LIST += ERRATA_V2_2331132
diff --git a/lib/cpus/errata_common.c b/lib/cpus/errata_common.c
new file mode 100644
index 0000000..9801245
--- /dev/null
+++ b/lib/cpus/errata_common.c
@@ -0,0 +1,30 @@
+/*
+ * Copyright (c) 2024, Arm Limited and Contributors. All rights reserved.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause
+ */
+
+/* Runtime C routines for errata workarounds and common routines */
+
+#include <arch.h>
+#include <arch_helpers.h>
+#include <cortex_a520.h>
+#include <cortex_x4.h>
+#include <lib/cpus/cpu_ops.h>
+#include <lib/cpus/errata.h>
+
+#if ERRATA_A520_2938996 || ERRATA_X4_2726228
+unsigned int check_if_affected_core(void)
+{
+ uint32_t midr_val = read_midr();
+ long rev_var = cpu_get_rev_var();
+
+ if (EXTRACT_PARTNUM(midr_val) == EXTRACT_PARTNUM(CORTEX_A520_MIDR)) {
+ return check_erratum_cortex_a520_2938996(rev_var);
+ } else if (EXTRACT_PARTNUM(midr_val) == EXTRACT_PARTNUM(CORTEX_X4_MIDR)) {
+ return check_erratum_cortex_x4_2726228(rev_var);
+ }
+
+ return ERRATA_NOT_APPLIES;
+}
+#endif
diff --git a/lib/el3_runtime/aarch64/context.S b/lib/el3_runtime/aarch64/context.S
index 5977c92..ab9d4b6 100644
--- a/lib/el3_runtime/aarch64/context.S
+++ b/lib/el3_runtime/aarch64/context.S
@@ -315,7 +315,7 @@
* always enable DIT in EL3
*/
#if ENABLE_FEAT_DIT
-#if ENABLE_FEAT_DIT == 2
+#if ENABLE_FEAT_DIT >= 2
mrs x8, id_aa64pfr0_el1
and x8, x8, #(ID_AA64PFR0_DIT_MASK << ID_AA64PFR0_DIT_SHIFT)
cbz x8, 1f
@@ -339,8 +339,7 @@
.macro restore_mpam3_el3
#if ENABLE_FEAT_MPAM
-#if ENABLE_FEAT_MPAM == 2
-
+#if ENABLE_FEAT_MPAM >= 2
mrs x8, id_aa64pfr0_el1
lsr x8, x8, #(ID_AA64PFR0_MPAM_SHIFT)
and x8, x8, #(ID_AA64PFR0_MPAM_MASK)
diff --git a/lib/el3_runtime/aarch64/context_mgmt.c b/lib/el3_runtime/aarch64/context_mgmt.c
index 15db9e5..ce3a4da 100644
--- a/lib/el3_runtime/aarch64/context_mgmt.c
+++ b/lib/el3_runtime/aarch64/context_mgmt.c
@@ -19,6 +19,8 @@
#include <common/debug.h>
#include <context.h>
#include <drivers/arm/gicv3.h>
+#include <lib/cpus/cpu_ops.h>
+#include <lib/cpus/errata.h>
#include <lib/el3_runtime/context_mgmt.h>
#include <lib/el3_runtime/cpu_data.h>
#include <lib/el3_runtime/pubsub_events.h>
@@ -1523,6 +1525,45 @@
}
#endif /* CTX_INCLUDE_EL2_REGS */
+#if IMAGE_BL31
+/*********************************************************************************
+* This function allows architecture feature asymmetry among cores.
+* TF-A assumes that all cores in the platform have architecture feature parity,
+* and hence the context is set up on a different core (e.g. the primary sets up
+* the context for secondary cores). This assumption may not hold for systems
+* where cores do not conform to the same Arch version, or where a CPU erratum
+* requires a certain feature to be disabled only on a given core.
+*
+* This function is called on secondary cores, during the warmboot path, to
+* override any disparity in the context set up by the primary.
+*********************************************************************************/
+void cm_handle_asymmetric_features(void)
+{
+#if ENABLE_SPE_FOR_NS == FEAT_STATE_CHECK_ASYMMETRIC
+ cpu_context_t *spe_ctx = cm_get_context(NON_SECURE);
+
+ assert(spe_ctx != NULL);
+
+ if (is_feat_spe_supported()) {
+ spe_enable(spe_ctx);
+ } else {
+ spe_disable(spe_ctx);
+ }
+#endif
+#if ERRATA_A520_2938996 || ERRATA_X4_2726228
+ cpu_context_t *trbe_ctx = cm_get_context(NON_SECURE);
+
+ assert(trbe_ctx != NULL);
+
+ if (check_if_affected_core() == ERRATA_APPLIES) {
+ if (is_feat_trbe_supported()) {
+ trbe_disable(trbe_ctx);
+ }
+ }
+#endif
+}
+#endif
+
/*******************************************************************************
* This function is used to exit to Non-secure world. If CTX_INCLUDE_EL2_REGS
* is enabled, it restores EL1 and EL2 sysreg contexts instead of directly
@@ -1531,6 +1572,18 @@
******************************************************************************/
void cm_prepare_el3_exit_ns(void)
{
+#if IMAGE_BL31
+ /*
+ * Check and handle Architecture feature asymmetry among cores.
+ *
+ * In the warmboot path, a secondary core's context is initialized on the
+ * core that issued the CPU_ON SMC call. If there is feature asymmetry
+ * between these cores, it is handled in this function call.
+ * For symmetric cores this is an empty function.
+ */
+ cm_handle_asymmetric_features();
+#endif
+
#if CTX_INCLUDE_EL2_REGS
#if ENABLE_ASSERTIONS
cpu_context_t *ctx = cm_get_context(NON_SECURE);
diff --git a/lib/romlib/Makefile b/lib/romlib/Makefile
index 9859ce1..29fbf78 100644
--- a/lib/romlib/Makefile
+++ b/lib/romlib/Makefile
@@ -45,7 +45,7 @@
.PHONY: all clean distclean
-all: $(BUILD_DIR)/romlib.bin $(LIB_DIR)/libwrappers.a
+all: $(BUILD_DIR)/romlib.bin $(BUILD_DIR)/romlib.ldflags $(LIB_DIR)/libwrappers.a
%.o: %.s | $$(@D)/
$(s)echo " AS $@"
@@ -89,6 +89,10 @@
$(s)echo " TBL $@"
$(q)$(ROMLIB_GEN) gentbl --output $@ --bti=$(ENABLE_BTI) $<
+$(BUILD_DIR)/romlib.ldflags: ../../$(PLAT_DIR)/jmptbl.i
+ $(s)echo " LDFLAGS $@"
+ $(q)$(ROMLIB_GEN) link-flags $< > $@
+
clean:
$(q)rm -f $(BUILD_DIR)/*
diff --git a/lib/romlib/romlib_generator.py b/lib/romlib/romlib_generator.py
index 0682dd4..8d2e88d 100755
--- a/lib/romlib/romlib_generator.py
+++ b/lib/romlib/romlib_generator.py
@@ -182,6 +182,22 @@
template_name = "jmptbl_entry_" + item["type"] + bti + ".S"
output_file.write(self.build_template(template_name, item, True))
+class LinkArgs(RomlibApplication):
+ """ Generates the link arguments to wrap functions. """
+
+ def __init__(self, prog):
+ RomlibApplication.__init__(self, prog)
+ self.args.add_argument("file", help="Input file")
+
+ def main(self):
+ index_file_parser = IndexFileParser()
+ index_file_parser.parse(self.config.file)
+
+ fns = [item["function_name"] for item in index_file_parser.items
+ if not item["patch"] and item["type"] != "reserved"]
+
+ print(" ".join("-Wl,--wrap " + f for f in fns))
+
class WrapperGenerator(RomlibApplication):
"""
Generates a wrapper function for each entry in the index file except for the ones that contain
@@ -214,21 +230,19 @@
if item["type"] == "reserved" or item["patch"]:
continue
- asm = self.config.b + "/" + item["function_name"] + ".s"
- if self.config.list:
- # Only listing files
- files.append(asm)
- else:
- with open(asm, "w") as asm_file:
- # The jump instruction is 4 bytes but BTI requires and extra instruction so
- # this makes it 8 bytes per entry.
- function_offset = item_index * (8 if self.config.bti else 4)
+ if not self.config.list:
+ # The jump instruction is 4 bytes but BTI requires an extra instruction so
+ # this makes it 8 bytes per entry.
+ function_offset = item_index * (8 if self.config.bti else 4)
- item["function_offset"] = function_offset
- asm_file.write(self.build_template("wrapper" + bti + ".S", item))
+ item["function_offset"] = function_offset
+ files.append(self.build_template("wrapper" + bti + ".S", item))
if self.config.list:
- print(" ".join(files))
+ print(self.config.b + "/wrappers.s")
+ else:
+ with open(self.config.b + "/wrappers.s", "w") as asm_file:
+ asm_file.write("\n".join(files))
class VariableGenerator(RomlibApplication):
""" Generates the jump table global variable with the absolute address in ROM. """
@@ -258,7 +272,8 @@
if __name__ == "__main__":
APPS = {"genvar": VariableGenerator, "pre": IndexPreprocessor,
- "gentbl": TableGenerator, "genwrappers": WrapperGenerator}
+ "gentbl": TableGenerator, "genwrappers": WrapperGenerator,
+ "link-flags": LinkArgs}
if len(sys.argv) < 2 or sys.argv[1] not in APPS:
print("usage: romlib_generator.py [%s] [args]" % "|".join(APPS.keys()), file=sys.stderr)
diff --git a/lib/romlib/templates/wrapper.S b/lib/romlib/templates/wrapper.S
index 734a68a..576474a 100644
--- a/lib/romlib/templates/wrapper.S
+++ b/lib/romlib/templates/wrapper.S
@@ -3,8 +3,9 @@
*
* SPDX-License-Identifier: BSD-3-Clause
*/
- .globl ${function_name}
-${function_name}:
+ .section .text.__wrap_${function_name}
+ .globl __wrap_${function_name}
+__wrap_${function_name}:
ldr x17, =jmptbl
mov x16, #${function_offset}
ldr x17, [x17]
diff --git a/lib/romlib/templates/wrapper_bti.S b/lib/romlib/templates/wrapper_bti.S
index ba9b11c..0dc316c 100644
--- a/lib/romlib/templates/wrapper_bti.S
+++ b/lib/romlib/templates/wrapper_bti.S
@@ -3,8 +3,9 @@
*
* SPDX-License-Identifier: BSD-3-Clause
*/
- .globl ${function_name}
-${function_name}:
+ .section .text.__wrap_${function_name}
+ .globl __wrap_${function_name}
+__wrap_${function_name}:
bti jc
ldr x17, =jmptbl
mov x16, #${function_offset}
diff --git a/make_helpers/build_macros.mk b/make_helpers/build_macros.mk
index 7050916..f523074 100644
--- a/make_helpers/build_macros.mk
+++ b/make_helpers/build_macros.mk
@@ -465,6 +465,10 @@
$(patsubst %.S,$(BUILD_DIR)/%,$(1))
endef
+ifeq ($(USE_ROMLIB),1)
+WRAPPER_FLAGS := @${BUILD_PLAT}/romlib/romlib.ldflags
+endif
+
# MAKE_BL macro defines the targets and options to build each BL image.
# Arguments:
# $(1) = BL stage
@@ -514,11 +518,11 @@
--map --list="$(MAPFILE)" --scatter=${PLAT_DIR}/scat/${1}.scat \
$(LDPATHS) $(LIBWRAPPER) $(LDLIBS) $(BL_LIBS) $(OBJS)
else ifeq ($($(ARCH)-ld-id),gnu-gcc)
- $$(q)$($(ARCH)-ld) -o $$@ $$(TF_LDFLAGS) $$(LDFLAGS) $(BL_LDFLAGS) -Wl,-Map=$(MAPFILE) \
+ $$(q)$($(ARCH)-ld) -o $$@ $$(TF_LDFLAGS) $$(LDFLAGS) $$(WRAPPER_FLAGS) $(BL_LDFLAGS) -Wl,-Map=$(MAPFILE) \
$(addprefix -Wl$(comma)--script$(comma),$(LINKER_SCRIPTS)) -Wl,--script,$(DEFAULT_LINKER_SCRIPT) \
$(OBJS) $(LDPATHS) $(LIBWRAPPER) $(LDLIBS) $(BL_LIBS)
else
- $$(q)$($(ARCH)-ld) -o $$@ $$(TF_LDFLAGS) $$(LDFLAGS) $(BL_LDFLAGS) -Map=$(MAPFILE) \
+ $$(q)$($(ARCH)-ld) -o $$@ $$(TF_LDFLAGS) $$(LDFLAGS) $$(WRAPPER_FLAGS) $(BL_LDFLAGS) -Map=$(MAPFILE) \
$(addprefix -T ,$(LINKER_SCRIPTS)) --script $(DEFAULT_LINKER_SCRIPT) \
$(OBJS) $(LDPATHS) $(LIBWRAPPER) $(LDLIBS) $(BL_LIBS)
endif
diff --git a/plat/arm/board/fvp/jmptbl.i b/plat/arm/board/fvp/jmptbl.i
index dc8032f..077283e 100644
--- a/plat/arm/board/fvp/jmptbl.i
+++ b/plat/arm/board/fvp/jmptbl.i
@@ -36,7 +36,6 @@
fdt fdt_get_name
fdt fdt_get_alias
fdt fdt_node_offset_by_phandle
-fdt fdt_subnode_offset
fdt fdt_add_subnode
mbedtls mbedtls_asn1_get_alg
mbedtls mbedtls_asn1_get_alg_null
diff --git a/plat/arm/board/tc/platform.mk b/plat/arm/board/tc/platform.mk
index fb70500..1a7289a 100644
--- a/plat/arm/board/tc/platform.mk
+++ b/plat/arm/board/tc/platform.mk
@@ -34,6 +34,7 @@
ENABLE_MPMM := 1
ENABLE_MPMM_FCONF := 1
ENABLE_FEAT_MTE2 := 2
+ENABLE_SPE_FOR_NS := 3
CTX_INCLUDE_AARCH32_REGS := 0
@@ -109,6 +110,9 @@
# CPU libraries for TARGET_PLATFORM=2
ifeq (${TARGET_PLATFORM}, 2)
+ERRATA_A520_2938996 := 1
+ERRATA_X4_2726228 := 1
+
TC_CPU_SOURCES += lib/cpus/aarch64/cortex_a520.S \
lib/cpus/aarch64/cortex_a720.S \
lib/cpus/aarch64/cortex_x4.S
@@ -116,6 +120,8 @@
# CPU libraries for TARGET_PLATFORM=3
ifeq (${TARGET_PLATFORM}, 3)
+ERRATA_A520_2938996 := 1
+
TC_CPU_SOURCES += lib/cpus/aarch64/cortex_a520.S \
lib/cpus/aarch64/cortex_a725.S \
lib/cpus/aarch64/cortex_x925.S
diff --git a/plat/nxp/s32/s32g274ardb2/plat_helpers.S b/plat/nxp/s32/s32g274ardb2/plat_helpers.S
index 193c884..10c0035 100644
--- a/plat/nxp/s32/s32g274ardb2/plat_helpers.S
+++ b/plat/nxp/s32/s32g274ardb2/plat_helpers.S
@@ -38,6 +38,8 @@
/* void plat_crash_console_flush(void); */
func plat_crash_console_flush
+ mov_imm x0, UART_BASE
+ b console_linflex_core_flush
ret
endfunc plat_crash_console_flush
diff --git a/services/std_svc/errata_abi/cpu_errata_info.h b/services/std_svc/errata_abi/cpu_errata_info.h
index 61e1076..d688431 100644
--- a/services/std_svc/errata_abi/cpu_errata_info.h
+++ b/services/std_svc/errata_abi/cpu_errata_info.h
@@ -8,6 +8,7 @@
#define ERRATA_CPUSPEC_H
#include <stdint.h>
+#include <arch.h>
#include <arch_helpers.h>
#if __aarch64__
@@ -31,8 +32,6 @@
/* Default values for unused memory in the array */
#define UNDEF_ERRATA {UINT_MAX, UCHAR_MAX, UCHAR_MAX}
-#define EXTRACT_PARTNUM(x) ((x >> MIDR_PN_SHIFT) & MIDR_PN_MASK)
-
#define RXPX_RANGE(x, y, z) (((x >= y) && (x <= z)) ? true : false)
/*