path: root/bl32
Age  Commit message  Author
2020-12-11  Add support for FEAT_MTPMU for Armv8.6  (Javier Almansa Sobrino)
If FEAT_PMUv3 is implemented and the PMEVTYPER<n>(_EL0).MT bit is implemented as well, it is possible to control whether PMU counters take into account events happening on other threads. If FEAT_MTPMU is implemented, EL3 (or EL2) can override the MT bit, leaving it at an effective state of 0 regardless of any write to it. This patch introduces the DISABLE_MTPMU flag, which allows disabling multithreaded event counting from EL3 (or EL2). The flag is disabled by default so the behaviour is consistent with architectures that do not implement FEAT_MTPMU.
Signed-off-by: Javier Almansa Sobrino <javier.almansasobrino@arm.com>
Change-Id: Iee3a8470ae8ba13316af1bd40c8d4aa86e0cb85e
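A minimal sketch of how such an override could look at EL3, assuming TF-A's usual system register accessors; the MDCR_EL3.MTPME bit name and position are assumptions for illustration, not taken from this log:

    #include <stdint.h>

    /* Assumed: the multithreaded-PMU override lives in MDCR_EL3.MTPME. */
    #define MDCR_MTPME_BIT    (1ULL << 28)

    /* Accessors assumed to exist in the style of TF-A's arch_helpers.h. */
    uint64_t read_mdcr_el3(void);
    void write_mdcr_el3(uint64_t val);

    static void disable_mtpmu(void)
    {
        /* Force the effective MT bit of PMEVTYPER<n>_EL0 to 0 for lower ELs. */
        write_mdcr_el3(read_mdcr_el3() & ~MDCR_MTPME_BIT);
    }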
2020-11-13  TSP: Fix GCC 11.0.0 compilation error.  (Alexei Fedorov)
This patch fixes the following compilation error reported by aarch64-none-elf-gcc 11.0.0:

  bl32/tsp/tsp_main.c: In function 'tsp_smc_handler':
  bl32/tsp/tsp_main.c:393:9: error: 'tsp_get_magic' accessing 32 bytes in a region of size 16 [-Werror=stringop-overflow=]
    393 |         tsp_get_magic(service_args);
        |         ^~~~~~~~~~~~~~~~~~~~~~~~~~~
  bl32/tsp/tsp_main.c:393:9: note: referencing argument 1 of type 'uint64_t *' {aka 'long long unsigned int *'}
  In file included from bl32/tsp/tsp_main.c:19:
  bl32/tsp/tsp_private.h:64:6: note: in a call to function 'tsp_get_magic'
     64 | void tsp_get_magic(uint64_t args[4]);
        |      ^~~~~~~~~~~~~

by changing the declaration of the tsp_get_magic function from

  void tsp_get_magic(uint64_t args[4]);

to

  uint128_t tsp_get_magic(void);

which returns the arguments directly in the x0 and x1 registers.

In bl32/tsp/tsp_main.c the current tsp_smc_handler() implementation calls tsp_get_magic(service_args); where the service_args array is declared as uint64_t service_args[2]; and tsp_get_magic() in bl32/tsp/aarch64/tsp_request.S copies only 2 registers into the output buffer:

  /* Store returned arguments to the array */
  stp x0, x1, [x4, #0]

Change-Id: Ib34759fc5d7bb803e6c734540d91ea278270b330
Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
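A sketch of the changed interface and call site after the fix, based on the description above and assuming TF-A's uint128_t typedef; variable names are illustrative:

    /* New declaration: the two magic words come back in x0/x1 as one
     * 128-bit value instead of being stored through a pointer. */
    uint128_t tsp_get_magic(void);

    /* Call site in tsp_smc_handler(), splitting the return value. */
    uint128_t magic = tsp_get_magic();
    uint64_t service_arg0 = (uint64_t)magic;            /* was service_args[0] */
    uint64_t service_arg1 = (uint64_t)(magic >> 64U);   /* was service_args[1] */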
2020-10-18  Merge "Increase type widths to satisfy width requirements" into integration  (Joanna Farley)
2020-10-12  Increase type widths to satisfy width requirements  (Jimmy Brisson)
Usually, C has no problem up-converting types to larger bit sizes. MISRA rule 10.7 requires that you not do this, or be very explicit about it. This resolves the following required rule:

  bl1/aarch64/bl1_context_mgmt.c:81:[MISRA C-2012 Rule 10.7 (required)]<None>
  The width of the composite expression "0U | ((mode & 3U) << 2U) | 1U | 0x3c0U" (32 bits) is less than the right hand operand "18446744073709547519ULL" (64 bits).

This also resolves MISRA defects such as:

  bl2/aarch64/bl2arch_setup.c:18:[MISRA C-2012 Rule 12.2 (required)]
  In the expression "3U << 20", shifting more than 7 bits, the number of bits in the essential type of the left expression, "3U", is not allowed.

Further, MISRA requires that all shifts don't overflow. The definition of PAGE_SIZE was (1U << 12), and 1U is 8 bits. This caused about 50 issues. This fixes the violation by changing the definition to 1UL << 12. Since this uses 32 bits, it should not create any issues for aarch32.

This patch also contains a fix for a build failure in the sun50i_a64 platform. Specifically, these MISRA fixes removed a single and instruction,

  92407e73        and     x19, x19, #0xffffffff

from the cm_setup_context function, which caused a relocation in psci_cpus_on_start to require a linker-generated stub. This increased the size of the .text section and caused an alignment later on to go over a page boundary and round up to the end of RAM before placing the .data section. This section is of non-zero size and therefore causes a link error.

The fix included here reorders the functions at link time without changing their ordering with respect to alignment.

Change-Id: I76b4b662c3d262296728a8b9aab7a33b02087f16
Signed-off-by: Jimmy Brisson <jimmy.brisson@arm.com>
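A small illustration of the kind of change involved; the PAGE_SIZE expressions are quoted from the message above, and the _OLD/_NEW names are only used here to show both forms side by side:

    /* Before: the shifted constant is essentially 8-bit in MISRA's
     * essential type model, and the result is only 32 bits wide. */
    #define PAGE_SIZE_OLD   (1U << 12)

    /* After: a long constant keeps the shift and the resulting width
     * legal when the value is mixed with 64-bit operands. */
    #define PAGE_SIZE_NEW   (1UL << 12)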
2020-10-05  bl32: add an assert on BL32_SIZE in sp_min.ld.S  (Yann Gautier)
This assert is present in all other linker scripts. It checks that the size of BL32 doesn't exceed its defined limit.
Change-Id: I0005959b5591d3eebd870045adafe437108bc9e1
Signed-off-by: Yann Gautier <yann.gautier@st.com>
2020-10-05  bl32: use SORT_BY_ALIGNMENT macro in sp_min.ld.S  (Yann Gautier)
The macro SORT_BY_ALIGNMENT is used for .text* and .rodata*. This reduces the space lost to object alignment. It also aligns sp_min.ld.S with the following patch: ebd6efae67c6a086bc97d807a638bde324d936dc. Some comments are also aligned with other linker scripts.
Change-Id: I2ea59edb445af0ed8c08fd883ffbf56852570d0c
Signed-off-by: Yann Gautier <yann.gautier@st.com>
2020-06-29  linker_script: move .rela.dyn section to bl_common.ld.h  (Masahiro Yamada)
The .rela.dyn section is the same for BL2-AT-EL3, BL31, and TSP. Move it to the common header file. I slightly changed the definition so that we can do "RELA_SECTION >RAM". It still produced equivalent ELF images. Please note I got rid of '.' from the VMA field. Otherwise, if the end of the previous .data section is not 8-byte aligned, it fails to link:

  aarch64-linux-gnu-ld.bfd: warning: changing start of section .rela.dyn by 4 bytes
  aarch64-linux-gnu-ld.bfd: warning: changing start of section .rela.dyn by 4 bytes
  aarch64-linux-gnu-ld.bfd: warning: changing start of section .rela.dyn by 4 bytes
  make: *** [Makefile:1071: build/qemu/release/bl31/bl31.elf] Error 1

Change-Id: Iba7422d99c0374d4d9e97e6fd47bae129dba5cc9
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2020-04-25  linker_script: move .data section to bl_common.ld.h  (Masahiro Yamada)
Move the .data section to the common header. I slightly tweaked some scripts as follows:

[1] bl1.ld.S has ALIGN(16). I added the DATA_ALIGN macro, which is 1 by default, but overridden by bl1.ld.S. Currently, ALIGN(16) of the .data section is redundant because commit 412865907699 ("Fix boot failures on some builds linked with ld.lld.") padded out the previous section to work around the issue of LLD version <= 10.0. This will be fixed in a future release of LLVM, so I am keeping the proper way to align the LMA.

[2] bl1.ld.S and bl2_el3.ld.S define __DATA_RAM_{START,END}__ instead of __DATA_{START,END}__. I put them outside of the .data section.

[3] SORT_BY_ALIGNMENT() is missing in tsp.ld.S, sp_min.ld.S, and mediatek/mt6795/bl31.ld.S. This commit adds SORT_BY_ALIGNMENT() for all images, so the symbol order in those three will change, but I do not think it is a big deal.

Change-Id: I215bb23c319f045cd88e6f4e8ee2518c67f03692
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2020-04-24  linker_script: move stacks section to bl_common.ld.h  (Masahiro Yamada)
The stacks section is the same for all BL linker scripts. Move it to the common header file. Change-Id: Ibd253488667ab4f69702d56ff9e9929376704f6c Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2020-04-02  linker_script: move bss section to bl_common.ld.h  (Masahiro Yamada)
Move the bss section to the common header. This adds BAKERY_LOCK_NORMAL and PMF_TIMESTAMP, which previously existed only in BL31. This is not a big deal because unused data should not be compiled in the first place. I believe this should be controlled by BL*_SOURCES in Makefiles, not by linker scripts. I investigated BL1, BL2, BL2U, BL31 for plat=fvp, and BL2-AT-EL3, BL31, BL31 for plat=uniphier. I did not see any more unexpected code addition. The bss section has bigger alignment. I added BSS_ALIGN for this. Currently, SORT_BY_ALIGNMENT() is missing in sp_min.ld.S, and with this change, the BSS symbols in SP_MIN will be sorted by the alignment. This is not a big deal (or, even better in terms of the image size). Change-Id: I680ee61f84067a559bac0757f9d03e73119beb33 Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2020-04-02  linker_script: replace common read-only data with RODATA_COMMON  (Masahiro Yamada)
The common section data are repeated in many linker scripts (often twice in each script to support SEPARATE_CODE_AND_RODATA). When you add a new read-only data section, you end up with touching lots of places. After this commit, you will only need to touch bl_common.ld.h when you add a new section to RODATA_COMMON. Replace a series of RO section with RODATA_COMMON, which contains 6 sections, some of which did not exist before. This is not a big deal because unneeded data should not be compiled in the first place. I believe this should be controlled by BL*_SOURCES in Makefiles, not by linker scripts. When I was working on this commit, the BL1 image size increased due to the fconf_populator. Commit c452ba159c14 ("fconf: exclude fconf_dyn_cfg_getter.c from BL1_SOURCES") fixed this issue. I investigated BL1, BL2, BL2U, BL31 for plat=fvp, and BL2-AT-EL3, BL31, BL31 for plat=uniphier. I did not see any more unexpected code addition. Change-Id: I5d14d60dbe3c821765bce3ae538968ef266f1460 Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2020-04-02  linker_script: move more common code to bl_common.ld.h  (Masahiro Yamada)
These are mostly used to collect data from special structure, and repeated in many linker scripts. To differentiate the alignment size between aarch32/aarch64, I added a new macro STRUCT_ALIGN. While I moved the PMF_SVC_DESCS, I dropped #if ENABLE_PMF conditional. As you can see in include/lib/pmf/pmf_helpers.h, PMF_REGISTER_SERVICE* are no-op when ENABLE_PMF=0. So, pmf_svc_descs and pmf_timestamp_array data are not populated. Change-Id: I3f4ab7fa18f76339f1789103407ba76bda7e56d0 Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2020-03-31  bl32: sp_min: reduce the alignment for fconf_populator  (Masahiro Yamada)
sp_min.ld.S is used for aarch32. ALIGN(4) is used for alignment of the other structures. I do not think struct fconf_populator is a special case. Let's use ALIGN(4) here too. Perhaps, this is just a copy-paste mistake of commit 26d1e0c33098 ("fconf: necessary modifications to support fconf in BL31 & SP_MIN"). Change-Id: I29f4c68680842c1b5ef913934b4ccf378e9bfcfb Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2020-03-20  Bug fix: Protect TSP prints with lock  (Madhukar Pappireddy)
CPUs use the console to print debug/info messages. This critical section must be guarded by locks to avoid overlapping messages from multiple CPUs.
Change-Id: I786bf90072c1ed73c4f53d8c950979d95255e67e
Signed-off-by: Madhukar Pappireddy <madhukar.pappireddy@arm.com>
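A minimal sketch of the kind of guard described, assuming TF-A's spinlock_t, spin_lock()/spin_unlock() and the INFO() macro; the lock name and wrapper function are made up for illustration:

    #include <common/debug.h>
    #include <lib/spinlock.h>

    static spinlock_t console_lock;    /* hypothetical lock protecting the console */

    static void tsp_info(const char *msg)
    {
        spin_lock(&console_lock);      /* serialise prints across CPUs */
        INFO("%s", msg);
        spin_unlock(&console_lock);
    }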
2020-03-12  Merge changes from topic "mp/enhanced_pal_hw" into integration  (Mark Dykes)
* changes:
  plat/arm/fvp: populate pwr domain descriptor dynamically
  fconf: Extract topology node properties from HW_CONFIG dtb
  fconf: necessary modifications to support fconf in BL31 & SP_MIN
  fconf: enhancements to firmware configuration framework
2020-03-11  fconf: necessary modifications to support fconf in BL31 & SP_MIN  (Madhukar Pappireddy)
Necessary infrastructure added to integrate the fconf framework in BL31 & SP_MIN. Created a few populator() functions which parse the HW_CONFIG device tree and registered them with the fconf framework. Many of the changes are only applicable to the fvp platform. This patch:

1. Adds necessary symbols and sections in the BL31 and SP_MIN linker scripts
2. Adds the necessary memory map entry for translation in BL31 and SP_MIN
3. Creates an abstraction layer for hardware configuration based on the fconf framework
4. Adds necessary changes to the build flow (makefiles)
5. Adds a minimal callback to read the hw_config dtb for capturing properties related to the GIC (interrupt-controller node)
6. Updates the fconf documentation

Change-Id: Ib6292071f674ef093962b9e8ba0d322b7bf919af
Signed-off-by: Madhukar Pappireddy <madhukar.pappireddy@arm.com>
2020-03-11  Factor xlat_table sections in linker scripts out into a header file  (Masahiro Yamada)
TF-A has so many linker scripts, at least one linker script for each BL image, and some platforms have their own ones. They duplicate quite similar code (and comments). When we add some changes to linker scripts, we end up with touching so many files. This is not nice in the maintainability perspective. When you look at Linux kernel, the common code is macrofied in include/asm-generic/vmlinux.lds.h, which is included from each arch linker script, arch/*/kernel/vmlinux.lds.S TF-A can follow this approach. Let's factor out the common code into include/common/bl_common.ld.h As a start point, this commit factors out the xlat_table section. Change-Id: Ifa369e9b48e8e12702535d721cc2a16d12397895 Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2020-03-06  TSP: corrected log information  (Manish Pandey)
In the CPU resume function, the CPU suspend count was printed instead of the CPU resume count.
Signed-off-by: Manish Pandey <manish.pandey2@arm.com>
Change-Id: I0c081dc03a4ccfb2129687f690667c5ceed00a5f
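A sketch of what the corrected print amounts to; the structure, array and field names here are assumptions modelled on typical TSP statistics code, not taken from this log:

    #include <stdint.h>
    #include <stdio.h>

    struct tsp_stats {
        uint32_t cpu_suspend_count;
        uint32_t cpu_resume_count;
    };

    static struct tsp_stats tsp_stats[4];   /* one entry per CPU (size assumed) */

    static void tsp_report_resume(uint32_t linear_id)
    {
        /* The fix: report the resume count, not the suspend count. */
        printf("TSP: cpu %u resumed, resume count %u\n",
               linear_id, tsp_stats[linear_id].cpu_resume_count);
    }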
2020-01-24  TSP: add PIE support  (Masahiro Yamada)
This implementation simply mimics that of BL31. Change-Id: Ibbaa4ca012d38ac211c52b0b3e97449947160e07 Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2020-01-22  Prevent speculative execution past ERET  (Anthony Steinhauser)
Even though ERET always causes a jump to another address, AArch64 CPUs speculatively execute the following instructions as if the ERET instruction were not a jump instruction. The speculative execution does not cross privilege levels (to the jump target as one would expect), but it continues at the kernel privilege level as if the ERET instruction did not change the control flow, thus speculatively executing anything that happens to be linked after the ERET instruction. Later, the results of this speculative execution are always architecturally discarded, but they can leak data through microarchitectural side channels. This speculative execution is very reliable (it seems to be unconditional) and it manages to complete even relatively performance-heavy operations (e.g. multiple dependent fetches from uncached memory).

This was fixed in Linux, FreeBSD, OpenBSD and OP-TEE OS:
https://github.com/torvalds/linux/commit/679db70801da9fda91d26caf13bf5b5ccc74e8e8
https://github.com/freebsd/freebsd/commit/29fb48ace4186a41c409fde52bcf4216e9e50b61
https://github.com/openbsd/src/commit/3a08873ece1cb28ace89fd65e8f3c1375cc98de2
https://github.com/OP-TEE/optee_os/commit/abfd092aa19f9c0251e3d5551e2d68a9ebcfec8a

It is demonstrated in a SafeSide example:
https://github.com/google/safeside/blob/master/demos/eret_hvc_smc_wrapper.cc
https://github.com/google/safeside/blob/master/kernel_modules/kmod_eret_hvc_smc/eret_hvc_smc_module.c

Signed-off-by: Anthony Steinhauser <asteinhauser@google.com>
Change-Id: Iead39b0b9fb4b8d8b5609daaa8be81497ba63a0f
2019-12-17  pmf: Make the runtime instrumentation work on AArch32  (Bence Szépkúti)
Ported the pmf asm macros and the asm code in the bl31 entrypoint necessary for the instrumentation to AArch32. Since smc dispatch is handled by the bl32 payload on AArch32, we provide this service only if AARCH32_SP=sp_min is set. Signed-off-by: Bence Szépkúti <bence.szepkuti@arm.com> Change-Id: Id33b7e9762ae86a4f4b40d7f1b37a90e5130c8ac
2019-09-26  AArch32: Disable Secure Cycle Counter  (Alexei Fedorov)
This patch changes the implementation for disabling the Secure Cycle Counter. For ARMv8.5 the counter gets disabled by setting the SDCR.SCCD bit on CPU cold/warm boot. For earlier architectures the PMCR register is saved/restored on secure world entry/exit from/to Non-secure state, and cycle counting gets disabled by setting the PMCR.DP bit. New ARMv8.5-PMU related definitions were added in the 'include/aarch32/arch.h' header file.
Change-Id: Ia8845db2ebe8de940d66dff479225a5b879316f8
Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
2019-09-13  Refactor ARMv8.3 Pointer Authentication support code  (Alexei Fedorov)
This patch provides the following features and makes the modifications listed below:
- Individual APIAKey key generation for each CPU.
- New key generation on every BL31 warm boot and TSP CPU On event.
- Per-CPU storage of APIAKey added in percpu_data[] of the cpu_data structure.
- `plat_init_apiakey()` function replaced with `plat_init_apkey()` which returns a 128-bit value and uses the Generic Timer physical counter value to increase the randomness of the generated key. The new function can be used for generation of all ARMv8.3-PAuth keys.
- ARMv8.3-PAuth specific code placed in `lib/extensions/pauth`.
- New `pauth_init_enable_el1()` and `pauth_init_enable_el3()` functions generate, program and enable APIAKey_EL1 for EL1 and EL3 respectively; `pauth_disable_el1()` and `pauth_disable_el3()` functions disable PAuth for EL1 and EL3 respectively; `pauth_load_bl31_apiakey()` loads the saved per-CPU APIAKey_EL1 from the cpu_data structure.
- Combined `save_gp_pauth_registers()` function replaces calls to `save_gp_registers()` and `pauth_context_save()`; `restore_gp_pauth_registers()` replaces `pauth_context_restore()` and `restore_gp_registers()` calls.
- `restore_gp_registers_eret()` function removed, with the corresponding code placed in `el3_exit()`.
- Fixed the issue where the `pauth_t pauth_ctx` structure allocated space for 12 uint64_t PAuth registers instead of 10, by removing the macro CTX_PACGAKEY_END from `include/lib/el3_runtime/aarch64/context.h` and assigning its value to CTX_PAUTH_REGS_END.
- Use of the MODE_SP_ELX and MODE_SP_EL0 macro definitions in the `msr spsel` instruction instead of hard-coded values.
- Changes in documentation related to ARMv8.3-PAuth and ARMv8.5-BTI.

Change-Id: Id18b81cc46f52a783a7e6a09b9f149b6ce803211
Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
2019-09-09  Enable MTE support in both secure and non-secure worlds  (Justin Chadwell)
This patch adds support for the new Memory Tagging Extension arriving in ARMv8.5. MTE support is now enabled by default on systems that support it at EL0. To enable it at ELx for both the non-secure and the secure world, the compiler flag CTX_INCLUDE_MTE_REGS adds the register saving and restoring needed to prevent register leakage between the worlds.
Change-Id: I2d4ea993d6b11654ea0d4757d00ca20d23acf36c
Signed-off-by: Justin Chadwell <justin.chadwell@arm.com>
2019-08-01  Replace __ASSEMBLY__ with compiler-builtin __ASSEMBLER__  (Julius Werner)
NOTE: The __ASSEMBLY__ macro is now deprecated in favor of __ASSEMBLER__. All common C compilers predefine a macro called __ASSEMBLER__ when preprocessing a .S file. There is no reason for TF-A to define its own __ASSEMBLY__ macro for this purpose instead. To unify code with the export headers (which use __ASSEMBLER__ to avoid one extra dependency), let's deprecate __ASSEMBLY__ and switch the code base over to the predefined standard.
Change-Id: Id7d0ec8cf330195da80499c68562b65cb5ab7417
Signed-off-by: Julius Werner <jwerner@chromium.org>
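A typical header pattern after such a switch; this is a generic sketch with made-up names, not a specific TF-A header:

    #ifndef EXAMPLE_H
    #define EXAMPLE_H

    #define EXAMPLE_FLAG    (1U << 0)       /* usable from both C and .S files */

    #ifndef __ASSEMBLER__                   /* predefined by the compiler for C */
    #include <stdint.h>
    uint32_t example_read_flag(void);       /* C-only declarations go here */
    #endif

    #endif /* EXAMPLE_H */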
2019-07-10  Remove references to old project name from common files  (John Tsichritzis)
The project has been renamed from "Arm Trusted Firmware (ATF)" to "Trusted Firmware-A (TF-A)" long ago. A few references to the old project name that still remained in various places have now been removed. This change doesn't affect any platform files. Any "ATF" references inside platform files, still remain. Change-Id: Id97895faa5b1845e851d4d50f5750de7a55bf99e Signed-off-by: John Tsichritzis <john.tsichritzis@arm.com>
2019-05-24  Add support for Branch Target Identification  (Alexei Fedorov)
This patch adds the functionality needed for platforms to provide the Branch Target Identification (BTI) extension, introduced to AArch64 in Armv8.5-A by adding the BTI instruction, which is used to mark valid targets for indirect branches. The patch sets the new GP bit [50] in the stage 1 Translation Table Block and Page entries to denote guarded EL3 code pages, which will cause the processor to trap instructions in protected pages that try to perform an indirect branch to any instruction other than BTI. The BTI feature is selected by the BRANCH_PROTECTION option, which supersedes the previous ENABLE_PAUTH option used for Armv8.3-A Pointer Authentication, and is disabled by default. Enabling BTI requires compiler support and was tested with GCC versions 9.0.0, 9.0.1 and 10.0.0. The assembly macros and helpers are modified to accommodate the BTI instruction. This is an experimental feature.
Note: The previous ENABLE_PAUTH build option to enable PAuth in EL3 is now made an internal flag and the BRANCH_PROTECTION flag should be used instead to enable Pointer Authentication.
Note: The USE_LIBROM=1 option is currently not supported.
Change-Id: Ifaf4438609b16647dc79468b70cd1f47a623362e
Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
2019-04-25  sp_min: allow inclusion of a platform-specific linker script  (Heiko Stuebner)
Similar to bl31, allow sp_min to also include a platform-specific linker script. This makes it possible, for example, to place specific code in other memories of the system, like resume code in SRAM, while the main TF-A lives in DDR.
Signed-off-by: Heiko Stuebner <heiko@sntech.de>
Change-Id: I67642f7bfca036b5d51eb0fa092b479a647a9cc1
2019-04-25  sp_min: make sp_min_warm_entrypoint public  (Heiko Stuebner)
Similar to bl31_warm_entrypoint, sp_min-based platforms may need that for special resume handling. Therefore move it from the private header to the sp_min platform header. Signed-off-by: Heiko Stuebner <heiko@sntech.de> Change-Id: I40d9eb3ff77cff88d47c1ff51d53d9b2512cbd3e
2019-03-12  Apply stricter speculative load restriction  (John Tsichritzis)
The SCTLR.DSSBS bit is zero by default thus disabling speculative loads. However, we also explicitly set it to zero for BL2 and TSP images when each image initialises its context. This is done to ensure that the image environment is initialised in a safe state, regardless of the reset value of the bit. Change-Id: If25a8396641edb640f7f298b8d3309d5cba3cd79 Signed-off-by: John Tsichritzis <john.tsichritzis@arm.com>
2019-02-27  TSP: Enable pointer authentication support  (Antonio Nino Diaz)
The size increase after enabling options related to ARMv8.3-PAuth is:

  +----------------------------+-------+-------+-------+--------+
  |                            | text  | bss   | data  | rodata |
  +----------------------------+-------+-------+-------+--------+
  | CTX_INCLUDE_PAUTH_REGS = 1 | +40   | +0    | +0    | +0     |
  |                            | 0.4%  |       |       |        |
  +----------------------------+-------+-------+-------+--------+
  | ENABLE_PAUTH = 1           | +352  | +0    | +16   | +0     |
  |                            | 3.1%  |       | 15.8% |        |
  +----------------------------+-------+-------+-------+--------+

Results calculated with the following build configuration:

  make PLAT=fvp SPD=tspd DEBUG=1 \
      SDEI_SUPPORT=1 \
      EL3_EXCEPTION_HANDLING=1 \
      TSP_NS_INTR_ASYNC_PREEMPT=1 \
      CTX_INCLUDE_PAUTH_REGS=1 \
      ENABLE_PAUTH=1

Change-Id: I6cc1fe0b2345c547dcef66f98758c4eb55fe5ee4
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
2019-02-07  locks: linker variables to calculate per-cpu bakery lock size  (Varun Wadekar)
This patch introduces explicit linker variables to mark the start and end of the per-cpu bakery lock section to help bakery_lock_normal.c calculate the size of the section. This patch removes the previously used '__PERCPU_BAKERY_LOCK_SIZE__' linker variable to make the code uniform across GNU linker and ARM linker. Change-Id: Ie0c51702cbc0fe8a2076005344a1fcebb48e7cca Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>
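A sketch of how the section size can be derived from such start/end markers; the symbol spellings follow the commit description but should be treated as assumptions:

    #include <stdint.h>

    /* Provided by the linker script around the per-CPU bakery lock section. */
    extern char __PERCPU_BAKERY_LOCK_START__[];
    extern char __PERCPU_BAKERY_LOCK_END__[];

    static inline uintptr_t percpu_bakery_lock_size(void)
    {
        /* Size is the distance between the two linker-defined marks. */
        return (uintptr_t)__PERCPU_BAKERY_LOCK_END__ -
               (uintptr_t)__PERCPU_BAKERY_LOCK_START__;
    }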
2019-02-01  Remove duplicated definitions of linker symbols  (Antonio Nino Diaz)
Many parts of the code were duplicating symbols that are defined in include/common/bl_common.h. It is better to only use the definitions in this header. As all the symbols refer to virtual addresses, they have to be uintptr_t, not unsigned long. This has also been fixed in bl_common.h. Change-Id: I204081af78326ced03fb05f69846f229d324c711 Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
2019-01-15  Correct typographical errors  (Paul Beesley)
Corrects typos in core code, documentation files, drivers, Arm platforms and services. None of the corrections affect code; changes are limited to comments and other documentation. Change-Id: I5c1027b06ef149864f315ccc0ea473e2a16bfd1d Signed-off-by: Paul Beesley <paul.beesley@arm.com>
2019-01-04  Sanitise includes across codebase  (Antonio Nino Diaz)
Enforce full include path for includes. Deprecate old paths. The following folders inside include/lib have been left unchanged: - include/lib/cpus/${ARCH} - include/lib/el3_runtime/${ARCH} The reason for this change is that having a global namespace for includes isn't a good idea. It defeats one of the advantages of having folders and it introduces problems that are sometimes subtle (because you may not know the header you are actually including if there are two of them). For example, this patch had to be created because two headers were called the same way: e0ea0928d5b7 ("Fix gpio includes of mt8173 platform to avoid collision."). More recently, this patch has had similar problems: 46f9b2c3a282 ("drivers: add tzc380 support"). This problem was introduced in commit 4ecca33988b9 ("Move include and source files to logical locations"). At that time, there weren't too many headers so it wasn't a real issue. However, time has shown that this creates problems. Platforms that want to preserve the way they include headers may add the removed paths to PLAT_INCLUDES, but this is discouraged. Change-Id: I39dc53ed98f9e297a5966e723d1936d6ccf2fc8f Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
2018-11-08  Standardise header guards across codebase  (Antonio Nino Diaz)
All identifiers, regardless of use, that start with two underscores are reserved. This means they can't be used in header guards. The style this project now uses is the full name of the file in capital letters followed by 'H'. For example, for a file called "uart_example.h", the header guard is UART_EXAMPLE_H. The exceptions are files that are imported from other projects:
- CryptoCell driver
- dt-bindings folders
- zlib headers
Change-Id: I50561bf6c88b491ec440d0c8385c74650f3c106e
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
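The pattern described above, using the file name quoted in the message; the declaration inside the guard is a hypothetical placeholder:

    /* uart_example.h */
    #ifndef UART_EXAMPLE_H          /* full file name, capitalised, plus H */
    #define UART_EXAMPLE_H

    void uart_example_init(void);   /* hypothetical declaration for illustration */

    #endif /* UART_EXAMPLE_H */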
2018-11-05  plat/arm: Support direct Linux kernel boot in AArch32  (Manish Pandey)
This option allows the Trusted Firmware to jump directly to a Linux kernel for AArch32 without the need for an intermediate loader such as U-Boot. As with the AArch64 variant, ARM_LINUX_KERNEL_AS_BL33 is only available when RESET_TO_SP_MIN=1 and when BL33 and the DTB are preloaded in memory.
Change-Id: I908bc1633696be1caad0ce2f099c34215c8e0633
Signed-off-by: Manish Pandey <manish.pandey2@arm.com>
2018-08-22  libc: Fix all includes in codebase  (Antonio Nino Diaz)
The codebase was using non-standard headers. They need to be replaced with the correct ones so that we can use the new libc headers.
Change-Id: I530f71d9510cb036e69fe79823c8230afe890b9d
Acked-by: Sumit Garg <sumit.garg@linaro.org>
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
2018-07-11  Merge pull request #1473 from robertovargas-arm/misra  (Dimitris Papastamos)
Misra
2018-07-11  Add end_vector_entry assembler macro  (Roberto Vargas)
Check_vector_size checks if the size of the vector fits in the size reserved for it. This check creates problems in the Clang assembler. A new macro, end_vector_entry, is added and check_vector_size is deprecated. This new macro fills the current exception vector until the next exception vector. If the size of the current vector is bigger than 32 instructions then it gives an error. Change-Id: Ie8545cf1003a1e31656a1018dd6b4c28a4eaf671 Signed-off-by: Roberto Vargas <roberto.vargas@arm.com>
2018-07-11  Add .extab and .exidx sections  (Roberto Vargas)
These sections are required by clang when the code is compiled for aarch32. These sections are related to the unwind of the stack in exceptions, but in the way that clang defines and uses them, the garbage collector cannot get rid of them. Change-Id: I085efc0cf77eae961d522472f72c4b5bad2237ab Signed-off-by: Roberto Vargas <roberto.vargas@arm.com>
2018-07-11  Use ALIGN instead of NEXT in linker scripts  (Roberto Vargas)
Clang linker doesn't support NEXT. As we are not using the MEMORY command to define discontinuous memory for the output file in any of the linker scripts, ALIGN and NEXT are equivalent. Change-Id: I867ffb9c9a76d4e81c9ca7998280b2edf10efea0 Signed-off-by: Roberto Vargas <roberto.vargas@arm.com>
2018-07-10  Fix MISRA rule 8.4  (Roberto Vargas)
Rule 8.4: A compatible declaration shall be visible when an object or function with external linkage is defined Fixed for: make DEBUG=1 PLAT=juno ARCH=aarch32 AARCH32_SP=sp_min RESET_TO_SP_MIN=1 JUNO_AARCH32_EL3_RUNTIME=1 bl32 Change-Id: I3ac25096b55774689112ae37bdf1222f9a9ecffb Signed-off-by: Roberto Vargas <roberto.vargas@arm.com>
2018-06-27  TSP: Enable cache along with MMU  (Jeenu Viswambharan)
Previously, data caches were disabled while enabling MMU only because of active stack. Now that we can enable MMU without using stack, we can enable both MMU and data caches at the same time. Change-Id: I73f3b8bae5178610e17e9ad06f81f8f6f97734a6 Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
2018-06-27  DynamIQ: Enable MMU without using stack  (Jeenu Viswambharan)
Having an active stack while enabling MMU has shown coherency problems. This patch builds on top of translation library changes that introduces MMU-enabling without using stacks. Previously, with HW_ASSISTED_COHERENCY, data caches were disabled while enabling MMU only because of active stack. Now that we can enable MMU without using stack, we can enable both MMU and data caches at the same time. NOTE: Since this feature depends on using translation table library v2, disallow using translation table library v1 with HW_ASSISTED_COHERENCY. Fixes ARM-software/tf-issues#566 Change-Id: Ie55aba0c23ee9c5109eb3454cb8fa45d74f8bbb2 Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
2018-05-23  Rename symbols and files relating to CVE-2017-5715  (Dimitris Papastamos)
This patch renames symbols and files relating to CVE-2017-5715 to make it easier to introduce new symbols and files for new CVE mitigations. Change-Id: I24c23822862ca73648c772885f1690bed043dbc7 Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
2018-04-27  types: use int-ll64 for both aarch32 and aarch64  (Masahiro Yamada)
Since commit 031dbb122472 ("AArch32: Add essential Arch helpers"), it is difficult to use consistent format strings for the printf() family between aarch32 and aarch64. For example, uint64_t is defined as 'unsigned long long' for aarch32 and as 'unsigned long' for aarch64. Likewise, uintptr_t is defined as 'unsigned int' for aarch32, and as 'unsigned long' for aarch64.

A problem typically arises when you use printf() in common code. One solution could be to cast the arguments to a type long enough for both architectures. For example, if 'val' is of uint64_t type:

  printf("val = %llx\n", (unsigned long long)val);

Or, somebody may suggest using a macro provided by <inttypes.h>:

  printf("val = %" PRIx64 "\n", val);

But both would make the code ugly. The solution adopted in the Linux kernel is to use the same typedefs for all architectures. The fixed integer types in the kernel space have been unified into int-ll64, as follows:

  typedef signed char        int8_t;
  typedef unsigned char      uint8_t;
  typedef signed short       int16_t;
  typedef unsigned short     uint16_t;
  typedef signed int         int32_t;
  typedef unsigned int       uint32_t;
  typedef signed long long   int64_t;
  typedef unsigned long long uint64_t;

  [ Linux commit: 0c79a8e29b5fcbcbfd611daf9d500cfad8370fcf ]

This gets along with the codebase shared between 32-bit and 64-bit, with the data models called ILP32 and LP64 respectively. The width of the primitive types is defined as follows:

             ILP32  LP64
  int          32    32
  long         32    64
  long long    64    64
  pointer      32    64

'long long' is 64 bit for both, so it is used for defining uint64_t. 'long' has the same width as a pointer, so it is used for uintptr_t. We still need an ifdef conditional for (s)size_t. All 64-bit architectures use "unsigned long" size_t, and most 32-bit architectures use "unsigned int" size_t. H8/300 and S/390 are known exceptions; they use "unsigned long" size_t even though they are 32-bit architectures.

One idea for simplification might be to define size_t as 'unsigned long' across architectures, then forbid the use of the "%z" format string. However, this would cause a mismatch between size_t and the sizeof() operator. We do not know the native type of sizeof(), so we need to guess it anyway. I want the following formula to always return 1:

  __builtin_types_compatible_p(size_t, typeof(sizeof(int)))

Fortunately, ARM is probably a majority case. As far as I know, all 32-bit ARM compilers use "unsigned int" size_t.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
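A short illustration of the point made above, assuming the int-ll64 typedefs described in the message are in effect on both architectures (with a host toolchain's own <stdint.h> the underlying types, and hence the warnings, may differ):

    #include <stdint.h>
    #include <stdio.h>

    void print_val(uint64_t val, uintptr_t addr)
    {
        /* uint64_t is 'unsigned long long' on both aarch32 and aarch64, so
         * %llx needs no cast; uintptr_t is 'unsigned long' on both, so %lx
         * matches as well. One format string serves common code. */
        printf("val = %llx at %lx\n", val, addr);
    }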
2018-04-13  Fix MISRA rule 8.4 Part 3  (Roberto Vargas)
Rule 8.4: A compatible declaration shall be visible when an object or function with external linkage is defined Fixed for: make DEBUG=1 PLAT=fvp SPD=tspd all Change-Id: I0a16cf68fef29cf00ec0a52e47786f61d02ca4ae Signed-off-by: Roberto Vargas <roberto.vargas@arm.com>
2018-04-13  Fix MISRA rule 8.3 Part 3  (Roberto Vargas)
Rule 8.3: All declarations of an object or function shall use the same names and type qualifiers Fixed for: make DEBUG=1 PLAT=fvp SPD=tspd all Change-Id: I4e31c93d502d433806dfc521479d5d428468b37c Signed-off-by: Roberto Vargas <roberto.vargas@arm.com>
2018-03-21  Rename 'smcc' to 'smccc'  (Antonio Nino Diaz)
When the source code says 'SMCC' it is talking about the SMC Calling Convention. The correct acronym is SMCCC. This affects a few definitions and file names. Some files have been renamed (smcc.h, smcc_helpers.h and smcc_macros.S) but the old files have been kept for compatibility, they include the new ones with an ERROR_DEPRECATED guard. Change-Id: I78f94052a502436fdd97ca32c0fe86bd58173f2f Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>