Commit message

Optimizing + quick tests are passing, devices boot.
TODO: Test and fix bugs in mips64.
Saves 16 bytes for most ArtMethods, a 7.5MB reduction in system PSS.
Some of the savings come from the removal of the virtual-method and
direct-method object arrays.
Bug: 19264997
(cherry picked from commit e401d146407d61eeb99f8d6176b2ac13c4df1e33)
Change-Id: I622469a0cfa0e7082a2119f3d6a9491eb61e3f3d

Fix some ArtMethod related bugs

Added root visiting for runtime methods; this is not currently
required since the GcRoots in these methods are null.
Added missing GetInterfaceMethodIfProxy in GetMethodLine; fixes
--trace run-tests 005 and 044.
Fixed an optimizing compiler bug where we used a normal stack
location instead of a double on ARM64; this fixes the debuggable
tests.
TODO: Fix JDWP tests.
Bug: 19264997
Change-Id: I7c55f69c61d1b45351fd0dc7185ffe5efad82bd3

ART: Fix casts for 64-bit pointers on 32-bit compiler.

Bug: 19264997
Change-Id: Ief45cdd4bae5a43fc8bfdfa7cf744e2c57529457

Fix JDWP tests after ArtMethod change

Fixes Throwable::GetStackDepth for exception event detection after
the internal stack trace representation change.
Adds a missing ArtMethod::GetInterfaceMethodIfProxy call in the case
of a proxy method.
Bug: 19264997
Change-Id: I363e293796848c3ec491c963813f62d868da44d2

Fix accidental IMT and root marking regression

We were always using the conflict trampoline. Also includes a fix for
a regression in GC time caused by extra roots; most of the regression
was from the IMT.
Fixed a bug in DumpGcPerformanceInfo where we would get SIGABRT due
to a detached thread.
EvaluateAndApplyChanges: ~2500 -> ~1980.
GC time: 8.2s -> 7.2s, due to 1s less of MarkConcurrentRoots.
Bug: 19264997
Change-Id: I4333e80a8268c2ed1284f87f25b9f113d4f2c7e0

Fix bogus image test assert

Previously we were comparing the size of the non-moving space to the
size of the image file. Now we properly compare the size of the image
space against the size of the image file.
Bug: 19264997
Change-Id: I7359f1f73ae3df60c5147245935a24431c04808a

[MIPS64] Fix art_quick_invoke_stub argument offsets.

The size of an ArtMethod reference grew, so we need to move the other
arguments and leave enough space for the ArtMethod* and the 'this'
pointer. This fixes the mips64 boot.
Bug: 19264997
Change-Id: I47198d5f39a4caab30b3b77479d5eedaad5006ab
Change-Id: I7ede0f59d5109644887bf5d39201d4e1bf043f34
The algorithm of ParallelMoveResolverNoSwap() is almost the same as
ParallelMoveResolverWithSwap(), except for the way circular
dependencies are resolved: NoSwap() uses an additional scratch
register. For example, (0->1) (1->2) (2->0) will be performed as
(2->scratch) (1->2) (0->1) (scratch->0).
On architectures without swap-register support, NoSwap() reduces the
number of moves from 3x(N-1) to (N+1) for a circular dependency of N
moves.
The NoSwap() algorithm also does not depend on architecture-specific
register-layout information, so it can support register pairs on
arm32, and X/W and D/S registers on arm64, without additional
modification.
Change-Id: Idf56bd5469bb78c0e339e43ab16387428a082318
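A minimal sketch of the scratch-register cycle resolution described
above; MoveOperands and ResolveCycleWithScratch are illustrative
names, not ART's actual API:

    #include <cstdio>
    #include <vector>

    // One pending move: the value currently in register `src` must end
    // up in register `dst`.
    struct MoveOperands {
      int src;
      int dst;
    };

    // Resolve a cycle m1..mn (each mi.dst feeds m(i+1).src, and mn.dst
    // feeds m1.src) with one scratch register: N+1 moves, no swaps.
    void ResolveCycleWithScratch(const std::vector<MoveOperands>& cycle,
                                 int scratch) {
      // 1. Park the last move's source so it survives being overwritten.
      std::printf("r%d -> scratch(r%d)\n", cycle.back().src, scratch);
      // 2. Emit the remaining moves in reverse order; each source is
      //    still intact because its consumer was emitted just before it.
      for (int i = static_cast<int>(cycle.size()) - 2; i >= 0; --i) {
        std::printf("r%d -> r%d\n", cycle[i].src, cycle[i].dst);
      }
      // 3. Complete the cycle out of the scratch register.
      std::printf("scratch(r%d) -> r%d\n", scratch, cycle.back().dst);
    }

    int main() {
      // The (0->1) (1->2) (2->0) example from the commit message:
      // prints (2->scratch) (1->2) (0->1) (scratch->0).
      ResolveCycleWithScratch({{0, 1}, {1, 2}, {2, 0}}, 9);
    }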
Code generation for HInstanceFieldGet, HInstanceFieldSet,
HStaticFieldGet, and HStaticFieldSet is refactored to follow the
structure used by the other backends.
Change-Id: I34a3bd17effa042238c6bf199848cbc2ec26ac5d
It also cleans up the frame build/removal code used by the JNI
compiler and generates stp/ldp instead of str/ldr. In addition, x19
has been unblocked in both the Quick and the Optimizing compiler.
Change-Id: Idbeac0942265f493266b2ef9b7a65bb4054f0e2d
CFI is necessary for stack unwinding in gdb, lldb, and libunwind.
Change-Id: I1a3480e3a4a99f48bf7e6e63c4e83a80cfee40a2
This reverts commit 0ba627337274ccfb8c9cb9bf23fffb1e1b9d1430.
Change-Id: I1ca10d15bbb49897a0cf541ab160431ec180a006
Update VIXL's interface to VIXL 1.9.
Change-Id: Iebae947539cbad65488b7195aaf01de284b71cbb
Signed-off-by: Serban Constantinescu <serban.constantinescu@arm.com>
Change-Id: Ia540df98755ac493fe61bd63f0bd94f6d97fbb57
This breaks compiling the core image:
Error after BCE: art::SSAChecker: Instruction 219 in block 1 does not dominate use 221 in block 1.
This reverts commit e295e6ec5beaea31be5d7d3c996cd8cfa2053129.
Change-Id: Ieeb48797d451836ed506ccb940872f1443942e4e
A mechanism is introduced so that a runtime method can be called from
code compiled with the optimizing compiler to deoptimize into the
interpreter. This can be used to establish invariants in the managed
code. If an invariant does not hold at runtime, we deoptimize and
continue execution in the interpreter. This allows optimizing the
managed code as if the invariant were proven at compile time, while
the exception is still thrown according to the semantics demanded by
the spec.
The invariant and optimization included in this patch are based on
the length of an array. Given a set of array accesses with constant
indices {c1, ..., cn}, we can optimize away all bounds checks iff
min(ci) >= 0 and max(ci) < array-length. The first condition can be
proven statically; the second is established with a
deoptimization-based invariant. This replaces n bounds checks with
one invariant check (plus slow-path code).
Change-Id: I8c6e34b56c85d25b91074832d13dba1db0a81569
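A sketch of the resulting code shape, in illustrative C++ rather than
ART's generated code; Deoptimize() is a hypothetical stand-in for the
runtime entry point:

    #include <cstdlib>

    // Stand-in for the runtime entry point that transfers execution to
    // the interpreter (hypothetical; the real mechanism restores
    // interpreter state and re-executes, so the spec-mandated
    // exception is thrown there).
    [[noreturn]] void Deoptimize() { std::abort(); }

    // Accesses with constant indices {1, 4, 9}: min = 1 >= 0 is proven
    // statically, so a single runtime test of max = 9 against the
    // length guards all three stores.
    void StoreThree(int* array, int length) {
      if (length <= 9) {
        Deoptimize();  // interpreter takes over and throws
                       // ArrayIndexOutOfBoundsException if needed
      }
      array[1] = 10;  // no bounds check
      array[4] = 20;  // no bounds check
      array[9] = 30;  // no bounds check
    }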
Change-Id: Id9aafcc13c1a085c17ce65d704c67b73f9de695d
Implement remaining explicit memory barrier code paths and temporarily
enable the use of explicit memory barriers for testing.
This CL also enables the use of instruction set features in the ARM64
backend. kUseAcquireRelease has been replaced with PreferAcquireRelease(),
which for now is statically set to false (prefer explicit memory barriers).
Please note that we still prefer acquire-release for the ARM64 Optimizing
Compiler, but we would like to exercise the explicit memory barrier code
path too.
Change-Id: I84e047ecd43b6fbefc5b82cf532e3f5c59076458
Signed-off-by: Serban Constantinescu <serban.constantinescu@arm.com>
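For context, the two code paths correspond roughly to the following
patterns, sketched here with std::atomic: on ARM64 the first compiles
to a load-acquire (ldar), the second to a plain load plus an explicit
barrier (dmb):

    #include <atomic>

    // Acquire-release path: a single load-acquire instruction.
    int LoadAcquire(const std::atomic<int>& field) {
      return field.load(std::memory_order_acquire);
    }

    // Explicit-barrier path: a relaxed load followed by an acquire
    // fence. Equivalent ordering guarantees for this use.
    int LoadWithExplicitBarrier(const std::atomic<int>& field) {
      int value = field.load(std::memory_order_relaxed);
      std::atomic_thread_fence(std::memory_order_acquire);
      return value;
    }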
When a block branches to a non-following block, but the blocks in
between do branch to it, we can avoid emitting the branch.
Change-Id: I9b343f662a4efc718cd4b58168f93162a24e1219
For now we block kQuickSuspendRegister (x19), since Quick and the
runtime use this as a suspend-counter register.
Change-Id: I090d386670e81e7924e4aa9a3864ef30d0580a30
Signed-off-by: Serban Constantinescu <serban.constantinescu@arm.com>
Change-Id: I044757a2f06e535cdc1480c4fc8182b89635baf6
Implement most intrinsics for the optimizing compiler for Arm64.
Change-Id: Idb459be09f0524cb9aeab7a5c7fccb1c6b65a707
- Share the computation of core_spill_mask and fpu_spill_mask
  between backends.
- Remove explicit stack-overflow check support: we would need to
  adjust the checks, and since they are not tested, they will easily
  bitrot.
Change-Id: I0b619b8de4e1bdb169ea1ae7c6ede8df0d65837a
Will work on other architectures and FP support in other CLs.
Change-Id: I8cef0343eedc7202d206f5217fdf0349035f0e4d
ImplicitNullChecks are recorded only for instructions directly (see
NB below) preceded by NullChecks in the graph. This way we avoid
recording redundant safepoints and minimize the code-size increase.
NB: ParallelMoves might be inserted by the register allocator between
the NullChecks and their uses. These modify the environment, and the
correct action would be to reverse their modification. This will be
addressed in a follow-up CL.
Change-Id: Ie50006e5a4bd22932dcf11348f5a655d253cd898
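To illustrate the underlying technique (a simplified sketch, not
ART's fault-handler machinery; Object and ThrowNullPointerException
are hypothetical):

    #include <cstdlib>

    struct Object { int field; };

    // Stand-in for the runtime helper (hypothetical).
    [[noreturn]] void ThrowNullPointerException() { std::abort(); }

    // Explicit null check: a compare and branch before the access.
    int GetFieldExplicit(Object* obj) {
      if (obj == nullptr) ThrowNullPointerException();
      return obj->field;
    }

    // Implicit null check: just load. A null obj faults on a low
    // address, and the runtime's SIGSEGV handler raises the NPE by
    // mapping the faulting PC to the safepoint recorded here.
    int GetFieldImplicit(Object* obj) {
      return obj->field;
    }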
- for backends: arm, arm64, x86, x86_64
- fixed parameter passing for CodeGenerator
- 003-omnibus-opcodes test verifies that NullPointerExceptions work as
expected
Change-Id: I1b302acd353342504716c9169a80706cf3aba2c8
The current stack frame calculation assumes that each live register
to be saved/restored has the word size of the machine. This fails for
X86, where a double in an XMM register takes up 8 bytes. Change the
calculation to keep track of the number of core registers and the
number of fp registers to handle this distinction.
This is slightly pessimal, as the registers may not be active at the
same time, but the only way to handle that would be to allocate both
classes of registers simultaneously, or to remember all the active
intervals, match them up, and compute the size of each safepoint
interval.
Change-Id: If7860aa319b625c214775347728cdf49a56946eb
Signed-off-by: Mark Mendell <mark.p.mendell@intel.com>
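A sketch of the corrected computation described above; the function
and parameter names are illustrative, not ART's actual fields:

    #include <cstddef>

    // Core registers take one machine word each, while fp registers
    // may hold a double (8 bytes), as with x86 XMM registers.
    size_t ComputeFrameSize(size_t number_of_core_registers,
                            size_t number_of_fp_registers,
                            size_t word_size,
                            size_t frame_alignment /* power of two */) {
      size_t size = number_of_core_registers * word_size +
                    number_of_fp_registers * 8 /* bytes per fp reg */;
      // Round up to the stack alignment required by the ABI.
      return (size + frame_alignment - 1) & ~(frame_alignment - 1);
    }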
The basic approach is:
- An instruction that needs two registers gets two intervals.
- When allocating the low part, we also allocate the high part.
- When splitting a low (or high) interval, we also split the high
(or low) equivalent.
- Allocation follows the (S/D register) requirement that low
registers are always even and the high equivalent is low + 1.
Change-Id: I06a5148e05a2ffc7e7555d08e871ed007b4c2797
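A minimal sketch of the pairing rule from the last bullet; the
free-register bitmask and names are illustrative, not the allocator's
real data structures:

    #include <optional>

    // Find a free aligned register pair for a 64-bit value: the low
    // half must live in an even-numbered register and the high half
    // in low + 1, mirroring the S/D-register requirement.
    std::optional<int> AllocateRegisterPair(unsigned free_mask) {
      for (int low = 0; low + 1 < 32; low += 2) {  // low is always even
        int high = low + 1;                        // high equivalent
        bool low_free = ((free_mask >> low) & 1u) != 0;
        bool high_free = ((free_mask >> high) & 1u) != 0;
        if (low_free && high_free) {
          return low;  // the pair (low, high) backs the two intervals
        }
      }
      return std::nullopt;  // no aligned pair free: split or spill
    }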
Add support for rem-float, rem-double and volatile memory accesses
using acquire-release and memory barriers.
Change-Id: I96a24dff66002c3b772c3d8e6ed792e3cb59048a
Signed-off-by: Serban Constantinescu <serban.constantinescu@arm.com>
X64 has one libcore test failing, and codegen_test on
arm is failing.
This reverts commit 6004796d6c630696127df2494dcd4f30d1367a34.
Change-Id: I20e00431fa18e11ce4c0cb6fffa91977fa8e9b4f
This change builds on:
https://android-review.googlesource.com/#/c/118983/
- Also fix an x86_64 assembler bug triggered by this change.
- Fix (and improve) x86's backend byte register usage.
- Fix a bug in the baseline register allocator: a fixed out register
  must prevent inputs from allocating it.
Change-Id: I4883862e29b4e4b6470f1823cf7eab7e7863d8ad
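The last fix describes a classic allocator constraint; a minimal
sketch with illustrative names, using a plain bitset to stand in for
the allocator's register state:

    #include <bitset>
    #include <cstddef>

    using RegSet = std::bitset<16>;  // one bit per physical register

    // Candidate registers for an input interval. If the instruction's
    // output is fixed to a specific register, block it for the inputs;
    // otherwise an input could be assigned the same register and be
    // clobbered by the result before it is read.
    RegSet InputCandidates(RegSet free_regs, int fixed_out_reg) {
      if (fixed_out_reg >= 0) {
        free_regs.reset(static_cast<size_t>(fixed_out_reg));
      }
      return free_regs;
    }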
Change-Id: Idc6e84eee66170de4a9c0a5844c3da038c083aa7
Add support for more IRs and update others.
Change-Id: Iae1bef01dc3c0d238a46fbd2800e71c38288b1d2
Signed-off-by: Serban Constantinescu <serban.constantinescu@arm.com>
This patch updates the interface to VIXL 1.7 and enables the debug version of
VIXL when ART is built in debug mode.
Change-Id: I443fb941bec3cffefba7038f93bb972e6b7d8db5
Signed-off-by: Serban Constantinescu <serban.constantinescu@arm.com>
These constants were defined prior to k{InstructionSet}PointerSize,
so use them consistently in the optimizing compiler as a first step.
We can discuss whether we should remove them in a second step.
Change-Id: If129de1a3bb8b65f8d9c816a8ad466815fb202e6
Change-Id: I4b6425135d1af74912a206411288081d2516f8bf
The two locations of the index and length could overlap, so we need a
parallel move. Also factor out the code for doing a parallel move
based on two locations.
Change-Id: Iee8b3459e2eed6704d45e9a564fb2cd050741ea4
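A toy illustration of why the overlap forces a parallel move: if the
index and length must swap places, sequential moves clobber one
value, while a parallel-move resolver detects the cycle and emits a
swap (or routes one value through a scratch location):

    #include <cstdio>
    #include <utility>

    int main() {
      int r0 = 7;  // length currently lives in r0
      int r1 = 3;  // index currently lives in r1
      // The slow path needs index in r0 and length in r1. Sequential
      // moves
      //   r0 = r1;  // index -> r0, but the length is destroyed
      //   r1 = r0;  // copies the index again; both registers hold 3
      // lose a value. A parallel move sees the cycle and swaps:
      std::swap(r0, r1);
      std::printf("index=%d length=%d\n", r0, r1);  // index=3 length=7
    }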
Change-Id: I781ddcbc61eb2b04ae80b1c7697e1ed5694bd5b9
Change-Id: I0d97ab0f5ab770fee62c819505743febbce8835e
- We currently don't run optimizations in the presence of a try/catch.
- We therefore implement Quick's mapping table.
- Also fix a missing null check on array-length.
Change-Id: I6917dfcb868e75c1cf6eff32b7cbb60b6cfbd68f
Fix associated errors about unused parameters and implicit sign
conversions. For sign conversions this was largely in the area of
enums, so add ostream operators for the affected enums and fix
tools/generate-operator-out.py.
Tidy arena allocation code and arena-allocated data types, rather
than fixing new and delete operators.
Remove dead code.
Change-Id: I5b433e722d2f75baacfacae4d32aef4a828bfe1b
The ARM64 port uses VIXL for code generation, deferring to it work
such as label binding, branch resolving, register-type coherency
checking, and immediate-value handling.
Change-Id: I0a44508c0c991f472a63e67b3469cdd878fe1a68
Signed-off-by: Serban Constantinescu <serban.constantinescu@arm.com>
Signed-off-by: Alexandre Rames <alexandre.rames@arm.com>