path: root/lib
Commit log (each entry shows the commit message, author, date, number of files changed and lines -removed/+added)
* Merge changes from topic "aa/sel2_support" into integration
    (Olivier Deprez, 2019-12-13, 1 file changed: -0/+10)

      * changes:
        S-EL2 Support: Check for AArch64
        Add support for enabling S-EL2
| * S-EL2 Support: Check for AArch64
    (Artsem Artsemenka, 2019-12-06, 1 file changed: -1/+7)

      Check that entry point information requesting S-EL2 has AArch64 as an
      execution state during context setup.

      Signed-off-by: Artsem Artsemenka <artsem.artsemenka@arm.com>
      Change-Id: I447263692fed6e55c1b076913e6eb73b1ea735b7
| * Add support for enabling S-EL2
    (Achin Gupta, 2019-12-06, 1 file changed: -0/+4)

      This patch adds support for enabling S-EL2 if this EL is specified in
      the entry point information being used to initialise a secure context.
      It is the caller's responsibility to check if S-EL2 is available on
      the system before requesting this EL through the entry point
      information.

      Signed-off-by: Achin Gupta <achin.gupta@arm.com>
      Change-Id: I2752964f078ab528b2e80de71c7d2f35e60569e1
* | Merge "libc: add memrchr" into integrationAlexei Fedorov2019-12-112-0/+25
|\ \
| * | libc: add memrchr
    (Ambroise Vincent, 2019-12-11, 2 files changed: -0/+25)

      This function scans a string backwards from the end for the first
      instance of a character.

      Change-Id: I46b21573ed25a0ff222eac340e1e1fb93b040763
      Signed-off-by: Ambroise Vincent <ambroise.vincent@arm.com>
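For reference, a memrchr-style routine typically looks like the minimal C sketch below. This illustrates the behaviour described above and is not necessarily the exact code added by the patch.

    #include <stddef.h>

    /* Scan the last n bytes of s backwards and return a pointer to the
     * last occurrence of c, or NULL if c does not occur. */
    void *memrchr(const void *s, int c, size_t n)
    {
        const unsigned char *p = (const unsigned char *)s + n;

        while (n-- != 0U) {
            p--;
            if (*p == (unsigned char)c) {
                return (void *)p;
            }
        }

        return NULL;
    }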
* / adding support to enable different personality of the same soc.
    (Pankaj Gupta, 2019-11-26, 3 files changed: -13/+18)

      The same SoC can have different personalities, created with different
      numbers of cores and clusters. As a result, the platform-specific
      power domain tree is created after identifying the personality of the
      SoC, and may therefore differ between personalities of the same SoC.

      The PSCI library code now deduces 'plat_core_count' while populating
      the power domain tree topology and returns the number of cores.

      PLATFORM_CORE_COUNT remains valid for a SoC, such that
      psci_plat_core_count <= PLATFORM_CORE_COUNT, and continues to be
      defined by the platform to size the data structures.

      Signed-off-by: Pankaj Gupta <pankaj.gupta@nxp.com>
      Change-Id: I1f5c47647631cae2dcdad540d64cf09757db7185
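As an illustration of what deducing the core count from the topology can look like, here is a hedged C sketch for a simple system/cluster/core tree. The array layout and the function name deduce_core_count are assumptions made for the example, not the actual PSCI library code.

    /* Assumed descriptor layout for this sketch:
     *   { 1, n_clusters, cores_in_cluster_0, ..., cores_in_cluster_N-1 } */
    static unsigned int deduce_core_count(const unsigned char *topology)
    {
        unsigned int n_clusters = topology[1];
        unsigned int count = 0U;

        for (unsigned int i = 0U; i < n_clusters; i++) {
            count += topology[2U + i];
        }

        /* The deduced value plays the role of psci_plat_core_count and
         * must satisfy count <= PLATFORM_CORE_COUNT. */
        return count;
    }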
* Merge "Coding guideline suggest not to use unsigned long" into integrationSandrine Bailleux2019-11-152-12/+13
|\
| * Coding guideline suggest not to use unsigned long
    (Deepika Bhavnani, 2019-11-12, 2 files changed: -12/+13)

      `unsigned long` should be replaced with:
        1. `unsigned int` or `unsigned long long` - if the value has a fixed
           width, chosen based on the architecture (AArch32 or AArch64)
        2. `u_register_t` - if it is supposed to be 32-bit wide on AArch32
           and 64-bit wide on AArch64

      Translation descriptors are always 32-bit wide; here `uint32_t` is
      used to describe the exact size of translation descriptors instead of
      `unsigned int`, which only guarantees a minimum of 32 bits.

      Signed-off-by: Deepika Bhavnani <deepika.bhavnani@arm.com>
      Change-Id: I6a2af2e8b3c71170e2634044e0b887f07a41677e
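A short C illustration of the guideline (sketch only; the local typedef stands in for TF-A's real u_register_t definition):

    #include <stdint.h>

    /* The typedef below is a stand-in for TF-A's real u_register_t
     * definition (an assumption of this sketch): it follows the native
     * register width, 32-bit on AArch32 and 64-bit on AArch64. */
    typedef uintptr_t u_register_t;

    uint32_t xlat_desc;     /* exactly 32 bits: a translation descriptor  */
    u_register_t reg_val;   /* native register width of the architecture */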
* | Merge "Fix white space errors + remove #if defined" into integrationSandrine Bailleux2019-11-131-7/+5
|\ \
| * | Fix white space errors + remove #if defined
    (laurenw-arm, 2019-10-24, 1 file changed: -7/+5)

      Fix a few white space errors and remove #if defined in the workaround
      for N1 Errata 1542419.

      Signed-off-by: Lauren Wehrmeister <lauren.wehrmeister@arm.com>
      Change-Id: I07ac5a2fd50cd63de53c06e3d0f8262871b62fad
* | Merge "Disable stack protection explicitly" into integrationPaul Beesley2019-11-121-1/+3
|\ \ | |/ |/|
| * Disable stack protection explicitly
    (Simon South, 2019-10-20, 1 file changed: -1/+3)

      Explicitly disable stack protection via the "-fno-stack-protector"
      compiler option when the ENABLE_STACK_PROTECTOR build option is set to
      "none" (the default).

      This allows the build to complete without link errors on systems where
      stack protection is enabled by default in the compiler.

      Change-Id: I0a676aa672815235894fb2cd05fa2b196fabb972
      Signed-off-by: Simon South <simon@simonsouth.net>
* | xlat_table_v2: Fix enable WARMBOOT_ENABLE_DCACHE_EARLY config
    (Artsem Artsemenka, 2019-10-18, 1 file changed: -2/+2)

      The WARMBOOT_ENABLE_DCACHE_EARLY option allows caches to be turned on
      early during the boot. But the xlat_change_mem_attributes_ctx() API
      did not do the required cache maintenance after the mmap tables are
      modified if WARMBOOT_ENABLE_DCACHE_EARLY is enabled.

      This meant that when the caches are turned off during power down, the
      tables in memory are accessed as part of cache maintenance for power
      down, and the tables are not correct at this point, which results in a
      data abort.

      This patch removes the optimization within
      xlat_change_mem_attributes_ctx() when WARMBOOT_ENABLE_DCACHE_EARLY is
      enabled.

      Signed-off-by: Artsem Artsemenka <artsem.artsemenka@arm.com>
      Change-Id: I82de3decba87dd13e9856b5f3620a1c8571c8d87
* Merge "Neoverse N1 Errata Workaround 1542419" into integrationSoby Mathew2019-10-073-1/+113
|\
| * Neoverse N1 Errata Workaround 1542419
    (laurenw-arm, 2019-10-04, 3 files changed: -1/+113)

      The coherent I-cache can cause a prefetch violation: when the core
      executes an instruction that has recently been modified, it might
      fetch a stale instruction, which violates the ordering of instruction
      fetches.

      The workaround includes an instruction sequence writing to
      implementation defined registers to trap all EL0 IC IVAU instructions
      to EL3, and a trap handler that executes a TLB inner-shareable
      invalidation to an arbitrary address followed by a DSB.

      Signed-off-by: Lauren Wehrmeister <lauren.wehrmeister@arm.com>
      Change-Id: Ic3b7cbb11cf2eaf9005523ef5578a372593ae4d6
* | Merge "Fix the CAS spinlock implementation" into integrationSoby Mathew2019-10-071-35/+18
|\ \
| * | Fix the CAS spinlock implementation
    (Soby Mathew, 2019-10-04, 1 file changed: -35/+18)

      Make the spinlock implementation use the ARMv8.1-LSE CAS instruction
      based on a platform build option. The CAS-based implementation used to
      be unconditionally selected for all ARMv8.1+ platforms.

      The previous CAS spinlock implementation had a bug wherein the
      spin_unlock() implementation had an `sev` after `stlr`, which is not
      sufficient. A dsb is needed to ensure that the stlr completes prior to
      the sev. Having a dsb is heavyweight, and a better solution would be
      to use load-exclusive semantics to monitor the lock and wake up from
      wfe when a store happens to the lock. The patch implements the same.

      Change-Id: I5283ce4a889376e4cc01d1b9d09afa8229a2e522
      Signed-off-by: Soby Mathew <soby.mathew@arm.com>
      Signed-off-by: Olivier Deprez <olivier.deprez@arm.com>
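The actual TF-A spinlock is hand-written AArch64 assembly. Purely as a conceptual illustration of a CAS-based lock with acquire/release semantics, a C sketch using GCC atomic builtins might look like the following; all names here are assumptions of the sketch, not TF-A code.

    #include <stdbool.h>

    /* Conceptual sketch only; not the TF-A assembly implementation. */
    typedef struct spinlock {
        unsigned int lock;
    } spinlock_t;

    static void spin_lock(spinlock_t *l)
    {
        unsigned int expected;

        do {
            expected = 0U;
            /* Compare-and-swap 0 -> 1 with acquire semantics so that
             * accesses inside the critical section cannot be reordered
             * before the lock is taken. */
        } while (!__atomic_compare_exchange_n(&l->lock, &expected, 1U,
                                              false, __ATOMIC_ACQUIRE,
                                              __ATOMIC_RELAXED));
    }

    static void spin_unlock(spinlock_t *l)
    {
        /* Release store; in the fixed assembly version this is an stlr,
         * with waiters monitoring the lock via load-exclusive + wfe
         * instead of relying on an explicit dsb/sev pair. */
        __atomic_store_n(&l->lock, 0U, __ATOMIC_RELEASE);
    }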
* | | Merge "TF-A: Add support for ARMv8.3-PAuth in BL1 SMC calls and BL2U" into ↵Soby Mathew2019-10-031-2/+26
|\ \ \ | | | | | | | | | | | | integration
| * | | TF-A: Add support for ARMv8.3-PAuth in BL1 SMC calls and BL2U
    (Alexei Fedorov, 2019-10-03, 1 file changed: -2/+26)

      This patch adds support for ARMv8.3-PAuth in BL1 SMC calls and the
      BL2U image for firmware updates by programming APIAKey_EL1 registers
      and enabling Pointer Authentication in EL3 and EL1 respectively.

      Change-Id: I875d952aba8242caf74fb5f4f2d2af6f0c768c08
      Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
* | | Introducing support for Cortex-A65AE
    (Imre Kis, 2019-10-03, 1 file changed: -0/+81)

      Change-Id: I1ea2bf088f1e001cdbd377cbfb7c6a2866af0422
      Signed-off-by: Imre Kis <imre.kis@arm.com>
* | | Introducing support for Cortex-A65
    (Imre Kis, 2019-10-02, 1 file changed: -0/+81)

      Change-Id: I645442d52a295706948e2cac88c36c1a3cb0bc47
      Signed-off-by: Imre Kis <imre.kis@arm.com>
* | Merge "Cortex_hercules: Add support for Hercules-AE" into integrationSoby Mathew2019-10-011-0/+100
|\ \
| * | Cortex_hercules: Add support for Hercules-AE
    (Artsem Artsemenka, 2019-09-30, 1 file changed: -0/+100)

      Not tested on FVP Model.

      Change-Id: Iedebc5c1fbc7ea577e94142b7feafa5546f1f4f9
      Signed-off-by: Artsem Artsemenka <artsem.artsemenka@arm.com>
* | Merge "AArch32: Disable Secure Cycle Counter" into integrationSoby Mathew2019-09-271-4/+22
|\ \ | |/ |/|
| * AArch32: Disable Secure Cycle Counter
    (Alexei Fedorov, 2019-09-26, 1 file changed: -4/+22)

      This patch changes the implementation for disabling the Secure Cycle
      Counter. For ARMv8.5 the counter gets disabled by setting the
      SDCR.SCCD bit on CPU cold/warm boot. For the earlier architectures the
      PMCR register is saved/restored on secure world entry/exit from/to
      Non-secure state, and cycle counting gets disabled by setting the
      PMCR.DP bit.

      In the 'include\aarch32\arch.h' header file new ARMv8.5-PMU related
      definitions were added.

      Change-Id: Ia8845db2ebe8de940d66dff479225a5b879316f8
      Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
* | Merge "Fix MTE support from causing unused variable warnings" into integrationSoby Mathew2019-09-271-2/+2
|\ \
| * | Fix MTE support from causing unused variable warnings
    (Justin Chadwell, 2019-09-20, 1 file changed: -2/+2)

      assert() calls are removed in release builds, and if such an assert
      call is the only use of a variable, an unused variable warning will be
      triggered in a release build. This patch fixes this problem when
      CTX_INCLUDE_MTE_REGS is enabled by not using an intermediate variable
      to store the result of get_armv8_5_mte_support().

      Change-Id: I529e10ec0b2c8650d2c3ab52c4f0cecc0b3a670e
      Signed-off-by: Justin Chadwell <justin.chadwell@arm.com>
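The general pattern being applied is sketched below. get_armv8_5_mte_support() and MTE_UNIMPLEMENTED are TF-A names referenced by the commit; the exact condition and the wrapper function names are assumptions for illustration, not the actual hunk.

    #include <assert.h>

    /* Before: 'mte' only feeds the assertion, so a release build (where
     * assert() compiles away) flags it as an unused variable. */
    void check_mte_before(void)
    {
        unsigned int mte = get_armv8_5_mte_support();
        assert(mte != MTE_UNIMPLEMENTED);
    }

    /* After: call the function directly inside the assertion; nothing is
     * left unused when assertions are compiled out. */
    void check_mte_after(void)
    {
        assert(get_armv8_5_mte_support() != MTE_UNIMPLEMENTED);
    }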
* / Adding new optional PSCI hook pwr_domain_on_finish_lateMadhukar Pappireddy2019-09-251-1/+9
|/ | | | | | | | | | This PSCI hook is similar to pwr_domain_on_finish but is guaranteed to be invoked with the respective core and cluster are participating in coherency. This will be necessary to safely invoke the new GICv3 API which modifies shared GIC data structures concurrently. Change-Id: I8e54f05c9d4ef5712184c9c18ba45ac97a29eb7a Signed-off-by: Madhukar Pappireddy <madhukar.pappireddy@arm.com>
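A platform would opt into the new hook roughly as sketched below. Only pwr_domain_on_finish_late comes from this commit; the include path, handler body and the my_* names are assumptions of the sketch.

    #include <lib/psci/psci.h>   /* assumed TF-A include path */

    /* Hypothetical platform handler: by the time it runs, the core and
     * cluster are already participating in coherency, so shared GIC data
     * structures can be touched safely. */
    static void my_pwr_domain_on_finish_late(const psci_power_state_t *target_state)
    {
        (void)target_state;
        /* e.g. late GICv3 programming goes here */
    }

    static const plat_psci_ops_t my_psci_ops = {
        .pwr_domain_on_finish_late = my_pwr_domain_on_finish_late,
        /* ... other mandatory and optional handlers ... */
    };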
* Merge changes from topic "db/unsigned_long" into integrationSandrine Bailleux2019-09-181-1/+1
|\ | | | | | | | | | | * changes: Unsigned long should not be used as per coding guidelines SCTLR and ACTLR are 32-bit for AArch32 and 64-bit for AArch64
| * SCTLR and ACTLR are 32-bit for AArch32 and 64-bit for AArch64
    (Deepika Bhavnani, 2019-09-13, 1 file changed: -1/+1)

      AArch64 System register SCTLR_EL1[31:0] is architecturally mapped to
      AArch32 System register SCTLR[31:0]. AArch64 System register
      ACTLR_EL1[31:0] is architecturally mapped to AArch32 System register
      ACTLR[31:0].

      `u_register_t` should be used when it's important to store the
      contents of a register in its native size.

      Signed-off-by: Deepika Bhavnani <deepika.bhavnani@arm.com>
      Change-Id: I0055422f8cc0454405e011f53c1c4ddcaceb5779
* | Merge "Refactor ARMv8.3 Pointer Authentication support code" into integrationSoby Mathew2019-09-134-257/+321
|\ \ | |/ |/|
| * Refactor ARMv8.3 Pointer Authentication support code
    (Alexei Fedorov, 2019-09-13, 4 files changed: -257/+321)

      This patch provides the following features and makes the modifications
      listed below:

      - Individual APIAKey key generation for each CPU.
      - New key generation on every BL31 warm boot and TSP CPU On event.
      - Per-CPU storage of APIAKey added in percpu_data[] of the cpu_data
        structure.
      - `plat_init_apiakey()` function replaced with `plat_init_apkey()`,
        which returns a 128-bit value and uses the Generic Timer physical
        counter value to increase the randomness of the generated key. The
        new function can be used for generation of all ARMv8.3-PAuth keys.
      - ARMv8.3-PAuth specific code placed in `lib\extensions\pauth`.
      - New `pauth_init_enable_el1()` and `pauth_init_enable_el3()`
        functions generate, program and enable APIAKey_EL1 for EL1 and EL3
        respectively; `pauth_disable_el1()` and `pauth_disable_el3()`
        functions disable PAuth for EL1 and EL3 respectively;
        `pauth_load_bl31_apiakey()` loads the saved per-CPU APIAKey_EL1 from
        the cpu_data structure.
      - Combined `save_gp_pauth_registers()` function replaces calls to
        `save_gp_registers()` and `pauth_context_save()`;
        `restore_gp_pauth_registers()` replaces `pauth_context_restore()`
        and `restore_gp_registers()` calls.
      - `restore_gp_registers_eret()` function removed, with the
        corresponding code placed in `el3_exit()`.
      - Fixed the issue where the `pauth_t pauth_ctx` structure allocated
        space for 12 uint64_t PAuth registers instead of 10, by removing
        macro CTX_PACGAKEY_END from
        `include/lib/el3_runtime/aarch64/context.h` and assigning its value
        to CTX_PAUTH_REGS_END.
      - Use of MODE_SP_ELX and MODE_SP_EL0 macro definitions in the
        `msr spsel` instruction instead of hard-coded values.
      - Changes in documentation related to ARMv8.3-PAuth and ARMv8.5-BTI.

      Change-Id: Id18b81cc46f52a783a7e6a09b9f149b6ce803211
      Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
* | Merge "Assert if power level value greater then PSCI_INVALID_PWR_LVL" into ↵Soby Mathew2019-09-131-0/+1
|\ \ | | | | | | | | | integration
| * | Assert if power level value greater than PSCI_INVALID_PWR_LVL
    (Deepika Bhavnani, 2019-09-09, 1 file changed: -0/+1)

      Signed-off-by: Deepika Bhavnani <deepika.bhavnani@arm.com>
      Change-Id: I4a496d5a8e7a9a127cd6224c968539eb74932fca
* | | Merge "Unify type of "cpu_idx" across PSCI module." into integrationSoby Mathew2019-09-132-17/+19
|\ \ \ | |_|/ |/| |
| * | Unify type of "cpu_idx" across PSCI module.
    (Deepika Bhavnani, 2019-09-13, 2 files changed: -17/+19)

      cpu_idx is used as a mix of `unsigned int` and `signed int` in the
      code, with typecasting in some places. This change unifies cpu_idx as
      `unsigned int`, as the underlying API `plat_my_core_pos` returns
      `unsigned int`.

      It was discovered via Coverity issue CID 354715.

      Signed-off-by: Deepika Bhavnani <deepika.bhavnani@arm.com>
      Change-Id: I4f0adb0c596ff1177210c5fe803bff853f2e54ce
* | Merge "libc: fix sparse warning for __assert()" into integrationSoby Mathew2019-09-121-4/+5
|\ \
| * | libc: fix sparse warning for __assert()
    (Masahiro Yamada, 2019-09-11, 1 file changed: -4/+5)

      Sparse warns this:

        lib/libc/assert.c:29:6: error: symbol '__assert' redeclared with
        different type (originally declared at include/lib/libc/assert.h:36)
        - different modifiers

      Add __dead2 so the header declaration and the C definition match. I
      also changed '__dead2 void' to 'void __dead2' for consistency with
      other parts.

      Change-Id: Iefa4f0e787c24fa7e7e499d2e7baf54d4deb49ef
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
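The mismatch and the fix can be illustrated with a self-contained C sketch. Names are simplified: my_assert stands in for the real __assert, and the noreturn attribute is spelled out here rather than taken from TF-A's libc cdefs header.

    #include <stdio.h>
    #include <stdlib.h>

    /* TF-A spells this noreturn attribute __dead2 in its libc headers. */
    #define __dead2 __attribute__((__noreturn__))

    /* The declaration and the definition must carry the same noreturn
     * modifier, otherwise sparse reports the symbol as redeclared with a
     * different type. */
    void __dead2 my_assert(const char *file, unsigned int line);   /* header */

    void __dead2 my_assert(const char *file, unsigned int line)    /* C file */
    {
        printf("ASSERT: file %s, line %u\n", file, line);
        abort();    /* never returns, satisfying the attribute */
    }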
* | Merge changes from topic "jc/mte_enable" into integrationSoby Mathew2019-09-122-8/+42
|\ \ | | | | | | | | | | | | | | | * changes: Add documentation for CTX_INCLUDE_MTE_REGS Enable MTE support in both secure and non-secure worlds
| * | Enable MTE support in both secure and non-secure worlds
    (Justin Chadwell, 2019-09-09, 2 files changed: -8/+42)

      This patch adds support for the new Memory Tagging Extension arriving
      in ARMv8.5. MTE support is now enabled by default on systems that
      support it at EL0. To enable it at ELx for both the non-secure and the
      secure world, the build flag CTX_INCLUDE_MTE_REGS includes register
      saving and restoring when necessary in order to prevent register
      leakage between the worlds.

      Change-Id: I2d4ea993d6b11654ea0d4757d00ca20d23acf36c
      Signed-off-by: Justin Chadwell <justin.chadwell@arm.com>
* / Zeus: apply the MSR SSBS instruction
    (John Tsichritzis, 2019-09-11, 1 file changed: -1/+11)

      Zeus supports the SSBS mechanism and also the new MSR instruction to
      immediately apply the mitigation. Hence, the new instruction is
      utilised in the Zeus-specific reset function.

      Change-Id: I962747c28afe85a15207a0eba4146f9a115b27e7
      Signed-off-by: John Tsichritzis <john.tsichritzis@arm.com>
* Merge "AArch64: Disable Secure Cycle Counter" into integrationPaul Beesley2019-08-232-39/+96
|\
| * AArch64: Disable Secure Cycle Counter
    (Alexei Fedorov, 2019-08-21, 2 files changed: -39/+96)

      This patch fixes an issue where secure world timing information can be
      leaked because the Secure Cycle Counter is not disabled. For ARMv8.5
      the counter gets disabled by setting the MDCR_EL3.SCCD bit on CPU
      cold/warm boot. For the earlier architectures the PMCR_EL0 register is
      saved/restored on secure world entry/exit from/to Non-secure state,
      and cycle counting gets disabled by setting the PMCR_EL0.DP bit.

      The 'include\aarch64\arch.h' header file was tidied up and new
      ARMv8.5-PMU related definitions were added.

      Change-Id: I6f56db6bc77504634a352388990ad925a69ebbfa
      Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
* | Merge "Fix for N1 1043202 Errata Workaround" into integrationAlexei Fedorov2019-08-201-0/+1
|\ \
| * | Fix for N1 1043202 Errata Workaround
    (laurenw-arm, 2019-08-19, 1 file changed: -0/+1)

      The ISB instruction was removed from the N1 1043202 Errata Workaround
      [1]; this fix adds the ISB instruction back in.

      [1] http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.pjdoc-466751330-10325/index.html

      Signed-off-by: Lauren Wehrmeister <lauren.wehrmeister@arm.com>
      Change-Id: I74eac7f6ad38991c36d423ad6aa44558033ad388
* | | Merge "Coverity fix: Remove GGC ignore -Warray-bounds" into integrationPaul Beesley2019-08-201-11/+11
|\ \ \
| * | | Coverity fix: Remove GCC ignore -Warray-bounds
    (Deepika Bhavnani, 2019-08-16, 1 file changed: -11/+11)

      GCC diagnostics had been added to ignore array boundaries. Instead of
      ignoring the GCC warning, the code now checks the array boundaries and
      performs the array update only for valid elements.

      Resolves: `CID 246574` `CID 246710` `CID 246651`

      Signed-off-by: Deepika Bhavnani <deepika.bhavnani@arm.com>
      Change-Id: I7530ecf7a1707351c6ee87e90cc3d33574088f57
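The shape of the change is roughly the following C sketch, with made-up names and sizes rather than the actual TF-A hunk: instead of suppressing the -Warray-bounds diagnostic around a write, the index is validated before the array is updated.

    #include <stddef.h>

    #define TABLE_SIZE 8U   /* made-up size for the sketch */
    static unsigned int table[TABLE_SIZE];

    /* Only update the array when the index is known to be valid. */
    static void table_update(size_t idx, unsigned int value)
    {
        if (idx < TABLE_SIZE) {
            table[idx] = value;
        }
    }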
* / / FVP_Base_AEMv8A platform: Fix cache maintenance operations
    (Alexei Fedorov, 2019-08-16, 1 file changed: -5/+33)

      This patch fixes an FVP_Base_AEMv8A model hang issue seen with
      ARMv8.4+ configurations that have cache modelling enabled. The
      incorrect L1 cache flush operation to PoU, using the CLIDR_EL1 LoUIS
      field (which is required by the architecture to be zero for ARMv8.4-A
      with the ARMv8.4-S2FWB feature), is replaced with L1 to L2 and L2 to
      L3 (if L3 is present) cache flushes.

      The FVP_Base_AEMv8A model can be configured with L3 enabled by setting
      `cluster0.l3cache-size` and `cluster1.l3cache-size` to non-zero
      values, and the presence of L3 is checked in the
      `aem_generic_core_pwr_dwn` function by reading the CLIDR_EL1.Ctype3
      field value.

      Change-Id: If3de3d4eb5ed409e5b4ccdbc2fe6d5a01894a9af
      Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
* | Merge changes from topic "jc/coverity-fixes" into integrationPaul Beesley2019-08-133-51/+6
|\ \ | |/ |/| | | | | | | | | | | * changes: Fix Coverity #261967, Infinite loop Fix Coverity #343017, Missing unlock Fix Coverity #343008, Side affect in assertion Fix Coverity #342970, Uninitialized scalar variable
| * Fix Coverity #261967, Infinite loop
    (Justin Chadwell, 2019-08-06, 1 file changed: -47/+0)

      Coverity identified that the __aeabi_imod function will loop forever
      if the denominator is not a power of 2, which is probably not the
      desired behaviour.

      The functions in the rest of the file are compiler implementations of
      division for ARMv7 cores that do not implement hardware division,
      which the spec permits. However, while most of the functions in the
      file are documented and referenced in other places online,
      __aeabi_uimod and __aeabi_imod are not. For this reason, these
      functions have been removed from the code base, which also removes the
      Coverity error.

      Change-Id: I20066d72365329a8b03a5536d865c4acaa2139ae
      Signed-off-by: Justin Chadwell <justin.chadwell@arm.com>