Avoid undefined behavior for arm64 stemming from 1u << 32 in loops
with upper bound kNumberOfXRegisters.

Create iterators for enumerating bits in an integer either from high
to low or from low to high, and use them for
<arch>Context::FillCalleeSaves() on all architectures.

Refactor runtime/utils.{h,cc} by moving all bit-fiddling functions to
runtime/base/bit_utils.{h,cc} (together with the new bit iterators)
and all time-related functions to runtime/base/time_utils.{h,cc}.
Improve test coverage and fix some corner cases for the bit-fiddling
functions.

Bug: 13925192
(cherry picked from commit 80afd02024d20e60b197d3adfbb43cc303cf29e0)
Change-Id: I905257a21de90b5860ebe1e39563758f721eab82
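The reason a bit iterator sidesteps the 1u << 32 problem: instead of shifting a mask by an ever-growing loop index up to kNumberOfXRegisters, it clears the lowest set bit on each step, so no shift ever reaches the word width. The sketch below is illustrative only; the class and member names are placeholders, not necessarily the API added by this change.

    #include <cstdint>
    #include <cstdio>

    // Illustrative low-to-high bit iterator: yields the indices of the
    // set bits in 'value', least significant first, without ever
    // shifting by the full word width.
    class LowToHighBits {
     public:
      explicit LowToHighBits(uint32_t value) : value_(value) {}

      class Iterator {
       public:
        explicit Iterator(uint32_t bits) : bits_(bits) {}
        bool operator!=(const Iterator& other) const { return bits_ != other.bits_; }
        // Index of the lowest set bit still remaining.
        uint32_t operator*() const { return static_cast<uint32_t>(__builtin_ctz(bits_)); }
        // Clear the lowest set bit.
        Iterator& operator++() { bits_ &= bits_ - 1u; return *this; }
       private:
        uint32_t bits_;
      };

      Iterator begin() const { return Iterator(value_); }
      Iterator end() const { return Iterator(0u); }

     private:
      const uint32_t value_;
    };

    int main() {
      // E.g. a callee-save register mask: prints r0, r4, r5, r19.
      for (uint32_t reg : LowToHighBits(0x80031u)) {
        std::printf("r%u\n", reg);
      }
      return 0;
    }

FillCalleeSaves() could then walk the callee-save mask with such an iterator instead of testing every register index in turn.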
Also add a precondition, similar to the one present in the code
generators, covering the elimination of explicit clinit checks for
static invokes in non-baseline compilations.

Change-Id: I26f4dcb5d02824d7556f90b4b0c85b08b737fa53
Summary of high level changes:
- Adds compiler inliner support to identify string init methods
- Adds compiler support (Quick & Optimizing) for a new invoke code path
  that calls the method off the thread pointer (sketched below)
- Adds thread entrypoints for all string init methods
- Adds a map to the verifier to log when the receiver of a string init
  has been copied to other registers; used by the compiler and the
  interpreter

Change-Id: I797b992a8feb566f9ad73060011ab6f51eb7ce01
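A hedged sketch of what "calling the method off the thread pointer" amounts to: the per-thread structure carries one entrypoint slot per String.<init> overload, and the generated invoke loads the target from a fixed offset instead of going through the regular dispatch path. The struct layout, names, and signature below are placeholders, not ART's actual Thread layout or entrypoint set.

    #include <cstdint>
    #include <cstdio>

    // Stub standing in for a runtime entrypoint that allocates a string
    // from a char array; the real entrypoints live in the runtime.
    static void* StubNewStringFromChars(const uint16_t* /*chars*/, int32_t count) {
      std::printf("allocating string of length %d\n", count);
      return nullptr;
    }

    using StringInitFromCharsFn = void* (*)(const uint16_t* chars, int32_t count);

    // Per-thread structure with one slot per String.<init> overload.
    struct Thread {
      StringInitFromCharsFn string_init_from_chars;
      // ... further slots for the other String.<init> overloads ...
    };

    // What the compiled invoke boils down to: a load off the thread
    // pointer followed by an indirect call.
    inline void* InvokeStringInitFromChars(Thread* self,
                                           const uint16_t* chars,
                                           int32_t count) {
      return self->string_init_from_chars(chars, count);
    }

    int main() {
      Thread self{StubNewStringFromChars};
      const uint16_t chars[] = {u'h', u'i'};
      InvokeStringInitFromChars(&self, chars, 2);
      return 0;
    }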
Change-Id: Ie6aba02f4223b1de02530e1515c63505f37e184c
The intrinsics generally have specialized code, and the code for them
may be faster than what can be achieved with inlining. Thus the
inliner should skip intrinsics.

At the same time, easy methods are not worth intrinsifying, e.g.
String.length() and String.isEmpty(). The inliner can handle those
with no problem, and inlining can actually lead to better code, since
the call is not kept around through all of the optimizations.

Change-Id: Iab38e6c33f79efa54d845d4871cf26fa9b235ab0
Signed-off-by: Razvan A Lupusoru <razvan.a.lupusoru@intel.com>
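A minimal sketch of the policy described above, under the assumption of placeholder names: Method, IsSimpleIntrinsic, and the size budget are illustrative, not the actual ART inliner API.

    #include <cstddef>
    #include <cstdio>
    #include <string>

    // Placeholder method descriptor; the real inliner works on ART's
    // internal representation.
    struct Method {
      bool is_intrinsic;
      std::string name;
      size_t code_size;
    };

    // Trivial intrinsics such as String.length()/String.isEmpty() inline
    // cleanly and optimize better as regular code, so they stay eligible.
    inline bool IsSimpleIntrinsic(const Method& m) {
      return m.name == "java.lang.String.length" ||
             m.name == "java.lang.String.isEmpty";
    }

    inline bool ShouldInline(const Method& m) {
      if (m.is_intrinsic && !IsSimpleIntrinsic(m)) {
        // Hand-written intrinsic code is usually faster than inlined bytecode.
        return false;
      }
      return m.code_size <= 128;  // illustrative size budget
    }

    int main() {
      Method length{true, "java.lang.String.length", 8};
      Method index_of{true, "java.lang.String.indexOf", 64};
      std::printf("%d %d\n", ShouldInline(length), ShouldInline(index_of));  // prints "1 0"
      return 0;
    }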
Implement most intrinsics for the optimizing compiler for Arm64.
Change-Id: Idb459be09f0524cb9aeab7a5c7fccb1c6b65a707
Add intrinsics infrastructure to the optimizing compiler.
Add almost all intrinsics supported by Quick to the x86-64 backend.
Further intrinsics require more assembler support.
Change-Id: I48de9b44c82886bb298d16e74e12a9506b8e8807