path: root/include/arch/aarch32/arch_helpers.h
2024-02-28  feat(cpufeat): add a few helper functions  (Manish Pandey)
The following utility functions and bit definitions are added:

- Helper functions that report the presence of these features: FEAT_UAO, FEAT_EBEP, FEAT_SEBEP, FEAT_SSBS, FEAT_NMI, FEAT_PAN.
- Definitions of some missing SPSR bits.
- The GCSCR_EL1 register encoding and an accessor function.

Signed-off-by: Manish Pandey <manish.pandey2@arm.com>
Change-Id: Ifcead0dd8e3b32096e4ab810dde5d582a889785a
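A minimal sketch of the presence-helper pattern these functions follow, using FEAT_SSBS as the example. The field position (ID_AA64PFR1_EL1.SSBS in bits [7:4]) is an assumption, and the register value is passed in rather than read through an MRS accessor so the snippet stays self-contained:

    #include <stdbool.h>
    #include <stdint.h>

    /* Assumed field position: ID_AA64PFR1_EL1.SSBS in bits [7:4]. */
    #define ID_AA64PFR1_SSBS_SHIFT  4U
    #define ID_AA64PFR1_SSBS_MASK   0xfU

    /*
     * A non-zero SSBS field means the feature is implemented. In the real
     * header the register value would come from the ID register accessor.
     */
    static inline bool is_feat_ssbs_present(uint64_t id_aa64pfr1)
    {
        return ((id_aa64pfr1 >> ID_AA64PFR1_SSBS_SHIFT) &
                ID_AA64PFR1_SSBS_MASK) != 0U;
    }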
2023-06-29  refactor(pmu): convert FEAT_MTPMU to C and move to persistent register init  (Boyan Karatotev)
The FEAT_MTPMU feature disable runs very early after reset. This means it needs to be written in assembly, since the C runtime has not been initialised yet. However, there is no need for it to be initialised so soon: the PMU state is only relevant after TF-A has relinquished control. The code to do this is also very verbose and difficult to read.

Delaying the initialisation allows it to happen with the rest of the PMU. Align with FEAT_STATE in the process.

BREAKING CHANGE: This patch explicitly breaks the EL2 entry path. It is currently unsupported.

Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
Change-Id: I2aa659d026fbdb75152469f6d19812ece3488c6f
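A rough sketch of what the deferred, C-based disable can look like. The MDCR_EL3.MTPME bit position (28) is an assumption, and the helper takes the current register value plus the FEAT_STATE-style detection result as parameters so it stands alone:

    #include <stdbool.h>
    #include <stdint.h>

    /* Assumed bit position: MDCR_EL3.MTPME (multithreaded PMU enable) = bit 28. */
    #define MDCR_MTPME_BIT  (UINT64_C(1) << 28)

    /*
     * Instead of poking the register in early reset assembly, the extension
     * is switched off together with the rest of the persistent register init.
     */
    static uint64_t mdcr_disable_mtpmu(uint64_t mdcr_el3, bool feat_mtpmu_supported)
    {
        if (feat_mtpmu_supported) {
            mdcr_el3 &= ~MDCR_MTPME_BIT;
        }
        return mdcr_el3;
    }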
2023-06-29  feat(pmu): introduce pmuv3 lib/extensions folder  (Boyan Karatotev)
The enablement code for the PMU is scattered and difficult to track down. Factor the feature out into its own lib/extensions folder and consolidate the implementation. Treat it as an architecturally mandatory feature, as it currently is.

Additionally, do some cleanup on AArch64. Setting overflow bits in PMCR_EL0 is irrelevant for firmware, so don't do it. Then delay the PMU initialisation until the context management stage, which simplifies the early environment assembly. One side effect is that the PMU might count before this happens, so reset all counters to 0 to prevent any leakage.

Finally, add an enable to manage_extensions_realm(), as Realm world uses the PMU. This introduces the HPMN fixup to Realm world.

Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
Change-Id: Ie13a8625820ecc5fbfa467dc6ca18025bf6a9cd3
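A minimal sketch of the "reset all counters to 0" step mentioned above. The P and C bit positions follow the PMCR/PMCR_EL0 layout; the value is returned rather than written through the real PMCR accessor so the snippet is self-contained:

    #include <stdint.h>

    /* PMCR control bits shared by AArch32 PMCR and AArch64 PMCR_EL0. */
    #define PMCR_P_BIT  (UINT32_C(1) << 1)  /* reset all event counters */
    #define PMCR_C_BIT  (UINT32_C(1) << 2)  /* reset the cycle counter */

    /*
     * Zero the event and cycle counters so that nothing counted before the
     * context-management init can leak to lower ELs.
     */
    static uint32_t pmcr_reset_counters(uint32_t pmcr)
    {
        return pmcr | PMCR_P_BIT | PMCR_C_BIT;
    }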
2023-05-23  fix(qemu): fix 32-bit builds with stack protector  (Andre Przywara)
When using the ENABLE_STACK_PROTECTOR=strong build option, the QEMU code will try to use the RNDR CPU instructions to initialise the stack canary. Since the instructions are defined for AArch64 only, this will fail to build for AArch32. And even though we now always return "false" when asked about the availability of the RNDR instruction, the compiler will still leave the reference to read_rndr() in if optimisations are turned off (-O0).

Avoid this by providing a dummy read_rndr() implementation that makes the linker happy in any case.

This fixes the QEMU build for AArch32 with ENABLE_STACK_PROTECTOR=strong.

Change-Id: Ibf450ba4a46167fdf3a14a527d338350ced8b5ba
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
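A minimal sketch of such a dummy definition, assuming the helper keeps the AArch64 name read_rndr(); it exists only so unoptimised AArch32 builds can link when the canary code still references the symbol behind a feature check that always returns false:

    #include <stdint.h>

    /*
     * Never reached at runtime on AArch32; present only to satisfy the
     * linker when -O0 keeps the dead reference alive.
     */
    static inline uint64_t read_rndr(void)
    {
        return 0ULL;
    }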
2023-05-23  feat(cpufeat): add AArch32 PAN detection support  (Andre Przywara)
FEAT_PAN is implemented in AArch32 as well; provide the helper functions to query the feature's availability at runtime.

Change-Id: I375e3eb7b05955ea28a092ba99bb93302af48a0e
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
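A minimal sketch of such a runtime query, assuming PAN is reported in ID_MMFR3 bits [19:16]; the register value is passed in instead of being read through the MRC-based accessor so the example stands alone:

    #include <stdbool.h>
    #include <stdint.h>

    /* Assumed field position: ID_MMFR3.PAN in bits [19:16]. */
    #define ID_MMFR3_PAN_SHIFT  16U
    #define ID_MMFR3_PAN_MASK   0xfU

    /*
     * A non-zero PAN field means FEAT_PAN is implemented. In the real header
     * the value would come from the ID_MMFR3 accessor.
     */
    static inline bool is_feat_pan_present(uint32_t id_mmfr3)
    {
        return ((id_mmfr3 >> ID_MMFR3_PAN_SHIFT) & ID_MMFR3_PAN_MASK) != 0U;
    }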
2022-09-14  feat(gic): add APIs to raise NS and S-EL1 SGIs  (Florian Lugou)
This patch adds two helper functions:

- plat_ic_raise_ns_sgi to raise an NS SGI
- plat_ic_raise_s_el1_sgi to raise an S-EL1 SGI

Signed-off-by: Florian Lugou <florian.lugou@provenrun.com>
Change-Id: I6f262dd1da1d77fec3f850eb74189e726b8e24da
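A hedged usage sketch of the two helpers; the prototypes below are illustrative, with (SGI number, target) being an assumption based on the commit message rather than the exact signatures in the platform interface header:

    /* Illustrative prototypes; the real ones live in the platform interface. */
    void plat_ic_raise_ns_sgi(int sgi_num, unsigned long target);
    void plat_ic_raise_s_el1_sgi(int sgi_num, unsigned long target);

    /* Hypothetical example: raise SGI 8 as a Non-secure interrupt for a core. */
    static void notify_target_core(unsigned long target)
    {
        plat_ic_raise_ns_sgi(8, target);
    }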
2021-08-26  feat(trf): enable trace filter control register access from lower NS EL  (Manish V Badarkhe)
Introduced a build flag 'ENABLE_TRF_FOR_NS' to enable trace filter control register access from NS-EL2, or NS-EL1 (when NS-EL2 is implemented but unused).

Change-Id: If3f53b8173a5573424b9a405a4bd8c206ffdeb8c
Signed-off-by: Manish V Badarkhe <Manish.Badarkhe@arm.com>
2021-08-26  feat(sys_reg_trace): enable trace system registers access from lower NS ELs  (Manish V Badarkhe)
Introduced a build flag 'ENABLE_SYS_REG_TRACE_FOR_NS' to enable trace system register access from NS-EL2, or NS-EL1 (when NS-EL2 is implemented but unused).

Change-Id: Idc1acede4186e101758cbf7bed5af7b634d7d18d
Signed-off-by: Manish V Badarkhe <Manish.Badarkhe@arm.com>
2021-02-25  Enable v8.6 AMU enhancements (FEAT_AMUv1p1)  (johpow01)
ARMv8.6 adds virtual offset registers to support virtualization of the event counters in EL1 and EL0. This patch enables support for this feature in EL3 firmware.

Signed-off-by: John Powell <john.powell@arm.com>
Change-Id: I7ee1f3d9f554930bf5ef6f3d492e932e6d95b217
2020-10-27  aarch64/arm: Add compiler barrier to barrier instructions  (Andre Przywara)
When issuing barrier instructions like DSB or DMB, we must make sure that the compiler does not undermine our efforts to fence off instructions. Currently the compiler is free to move the barrier instruction around with respect to earlier or later memory access statements, which is not what we want.

Add a compiler barrier to the inline assembly statement in our DEFINE_SYSOP_TYPE_FUNC macro, to make sure memory accesses are not reordered by the compiler. This is in line with Linux's definition: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/arch/arm64/include/asm/barrier.h

Since those instructions share a definition, apart from DSB and DMB this now also covers some TLBI instructions. Having a compiler barrier there is also useful, although we probably have stronger barriers in place already.

Change-Id: If6fe97b13a562643a643efc507cb4aad29daa5b6
Reported-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
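A sketch of the resulting macro shape, with the "memory" clobber acting as the compiler barrier; the exact formatting in the header may differ:

    /* Define a function for a system instruction that takes a type specifier. */
    #define DEFINE_SYSOP_TYPE_FUNC(_op, _type)          \
    static inline void _op ## _type(void)               \
    {                                                    \
        __asm__ (#_op " " #_type : : : "memory");        \
    }

    /* Example instantiation: generates dsbsy(), emitting "dsb sy". */
    DEFINE_SYSOP_TYPE_FUNC(dsb, sy)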
2020-08-10  TF-A AMU extension: fix detection of group 1 counters.  (Alexei Fedorov)
This patch fixes a bug where the AMUv1 group 1 counters were always assumed to be implemented without checking for their presence, which otherwise caused an exception.

The AMU extension code was also modified as listed below:

- Added detection of AMUv1 for ARMv8.6.
- The 'PLAT_AMU_GROUP1_NR_COUNTERS' build option is removed, and the number of group 1 counters, 'AMU_GROUP1_NR_COUNTERS', is now calculated based on the 'AMU_GROUP1_COUNTERS_MASK' value.
- Added bit field definitions and access functions for the AMCFGR_EL0/AMCFGR and AMCGCR_EL0/AMCGCR registers.
- Unification of the amu.c AArch64 and AArch32 source files.
- Bug fixes and TF-A coding style compliant changes.

Change-Id: I14e407be62c3026ebc674ec7045e240ccb71e1fb
Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
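A minimal sketch of the presence check this fix introduces, assuming the group 1 counter count lives in AMCGCR/AMCGCR_EL0 bits [15:8]; the register value is passed in so the snippet is self-contained:

    #include <stdbool.h>
    #include <stdint.h>

    /* Assumed field position: AMCGCR.CG1NC (group 1 counter count) in bits [15:8]. */
    #define AMCGCR_CG1NC_SHIFT  8U
    #define AMCGCR_CG1NC_MASK   0xffU

    /*
     * Group 1 is only touched if the counter-group configuration register
     * reports a non-zero number of group 1 counters.
     */
    static inline bool amu_group1_supported(uint32_t amcgcr)
    {
        return ((amcgcr >> AMCGCR_CG1NC_SHIFT) & AMCGCR_CG1NC_MASK) != 0U;
    }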
2020-04-15  Provide a hint to power controller for DSU cluster power down  (Madhukar Pappireddy)
By writing 0 to bit 0 of the CLUSTERPWRDN DSU register, we send an advisory to the power controller that cluster power is not required when all cores are powered down. The AArch32 CLUSTERPWRDN register is architecturally mapped to the AArch64 CLUSTERPWRDN_EL1 register.

Change-Id: Ie6e67c1c7d811fa25c51e2e405ca7f59bd20c81b
Signed-off-by: Madhukar Pappireddy <madhukar.pappireddy@arm.com>
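A hedged sketch of the hint described above; write_clusterpwrdn() is a hypothetical stand-in for the system-register accessor (CLUSTERPWRDN on AArch32, CLUSTERPWRDN_EL1 on AArch64) that the real code uses:

    #include <stdint.h>

    /*
     * Bit 0 cleared: advise the power controller that cluster power is not
     * required once every core in the cluster is powered down.
     */
    static void dsu_hint_cluster_power_down(void (*write_clusterpwrdn)(uint32_t))
    {
        write_clusterpwrdn(0U);
    }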
2020-04-07  locks: bakery: use is_dcache_enabled() helper  (Masahiro Yamada)
bakery_lock_normal.c uses the raw register accessor, read_sctlr(_el3), to check whether the dcache is enabled. Using is_dcache_enabled() is cleaner and a good abstraction for library code like this.

A problem is that is_dcache_enabled() is declared in the local header lib/xlat_tables_v2/xlat_tables_private.h. I searched for a good place to declare this helper; moving it to arch_helpers.h, close to the cache operation helpers, looks good enough to me.

I also changed the type of 'is_cached' to bool for consistency, and to avoid MISRA warnings.

Change-Id: I9b016f67bc8eade25c316aa9c0db0fa4cd375b79
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
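A minimal sketch of the helper's logic: the dcache counts as enabled when the C bit of the current EL's SCTLR is set. The real helper reads SCTLR (or SCTLR_ELx) through the raw accessors it wraps; the value is passed in here to keep the example self-contained:

    #include <stdbool.h>
    #include <stdint.h>

    /* SCTLR.C (data/unified cache enable) is bit 2. */
    #define SCTLR_C_BIT  (UINT32_C(1) << 2)

    static inline bool is_dcache_enabled(uint32_t sctlr)
    {
        return (sctlr & SCTLR_C_BIT) != 0U;
    }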
2019-02-28  Cortex-A53: Workarounds for 819472, 824069 and 827319  (Ambroise Vincent)
The workarounds for these errata are so closely related that it is better to only have one patch to make it easier to understand.

Change-Id: I0287fa69aefa8b72f884833f6ed0e7775ca834e9
Signed-off-by: Ambroise Vincent <ambroise.vincent@arm.com>
2019-01-11  xlat v2: Dynamically detect need for CnP bit  (Antonio Nino Diaz)
ARMv8.2-TTCNP is mandatory from ARMv8.2 onwards, but it can be implemented in CPUs that don't implement all mandatory 8.2 features (and so have to claim to be a lower version).

This patch removes the usage of the ARM_ARCH_AT_LEAST() macro and uses the system ID registers to detect whether the bit needs to be set or not.

Change-Id: I7bcbf0c7c937590dfc2ca668cfd9267c50f7d52c
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
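A minimal sketch of the runtime detection that replaces the ARM_ARCH_AT_LEAST() check. The field position (ID_MMFR4.CnP in bits [15:12] on AArch32) is an assumption, and the register value is passed in so the example stands alone:

    #include <stdbool.h>
    #include <stdint.h>

    /* Assumed field position: ID_MMFR4.CnP in bits [15:12]. */
    #define ID_MMFR4_CNP_SHIFT  12U
    #define ID_MMFR4_CNP_MASK   0xfU

    /* A non-zero CnP field means the TTBR CnP bit may be set. */
    static inline bool is_armv8_2_ttcnp_present(uint32_t id_mmfr4)
    {
        return ((id_mmfr4 >> ID_MMFR4_CNP_SHIFT) & ID_MMFR4_CNP_MASK) != 0U;
    }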
2019-01-04  Sanitise includes across codebase  (Antonio Nino Diaz)
Enforce full include path for includes. Deprecate old paths.

The following folders inside include/lib have been left unchanged:

- include/lib/cpus/${ARCH}
- include/lib/el3_runtime/${ARCH}

The reason for this change is that having a global namespace for includes isn't a good idea. It defeats one of the advantages of having folders and it introduces problems that are sometimes subtle (because you may not know the header you are actually including if there are two of them).

For example, this patch had to be created because two headers were called the same way: e0ea0928d5b7 ("Fix gpio includes of mt8173 platform to avoid collision."). More recently, this patch has had similar problems: 46f9b2c3a282 ("drivers: add tzc380 support").

This problem was introduced in commit 4ecca33988b9 ("Move include and source files to logical locations"). At that time, there weren't too many headers, so it wasn't a real issue. However, time has shown that this creates problems.

Platforms that want to preserve the way they include headers may add the removed paths to PLAT_INCLUDES, but this is discouraged.

Change-Id: I39dc53ed98f9e297a5966e723d1936d6ccf2fc8f
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
2019-01-04  Reorganize architecture-dependent header files  (Antonio Nino Diaz)
The architecture-dependent header files in include/lib/${ARCH} and include/common/${ARCH} have been moved to /include/arch/${ARCH}.

Change-Id: I96f30fdb80b191a51448ddf11b1d4a0624c03394
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>