| Commit message | Author | Age | Files | Lines |
We still support architectures that do not have sdiv.
Issue: https://code.google.com/p/android/issues/detail?id=162257
Change-Id: I6d43620b7599f70a630668791a796a1703b62912
Change-Id: Ia540df98755ac493fe61bd63f0bd94f6d97fbb57
* changes:
Forbid the use of shifts in ShifterOperand in Thumb2
Make subs and adds alter flags when rn is an immediate
Inline long shift code
Change-Id: I2848636f892e276507d04f4313987b9f4c80686b
The optimization recognizes the negation pattern generated by 'javac'
and replaces it with a single condition. To this end, boolean values
are now consistently assumed to be represented by an integer.
This is a first optimization which deletes blocks from the HGraph and
does so by replacing the corresponding entries with null. Hence,
existing code can continue indexing the list of blocks with the block
ID, but must check for null when iterating over the list.
Change-Id: I7779da69cfa925c6521938ad0bcc11bc52335583
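As an illustrative sketch (not the compiler's actual code), the branch diamond javac emits for a boolean negation and the single condition the optimization folds it to can be compared in plain Java; the method names here are hypothetical:

```java
public class NegationPattern {
    // javac compiles `!b` as a conditional branch producing 0 or 1,
    // roughly the explicit diamond written out below.
    static boolean negateViaBranch(boolean b) {
        int result;
        if (b) {          // javac emits: ifne / iconst_0 / goto / iconst_1
            result = 0;
        } else {
            result = 1;
        }
        return result != 0;
    }

    // After the optimization, the diamond collapses to a single condition.
    static boolean negateFolded(boolean b) {
        return !b;
    }

    public static void main(String[] args) {
        for (boolean b : new boolean[] {true, false}) {
            if (negateViaBranch(b) != negateFolded(b)) {
                throw new AssertionError("mismatch for " + b);
            }
        }
        System.out.println("ok");
    }
}
```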
This breaks compiling the core image:
Error after BCE: art::SSAChecker: Instruction 219 in block 1 does not dominate use 221 in block 1.
This reverts commit e295e6ec5beaea31be5d7d3c996cd8cfa2053129.
Change-Id: Ieeb48797d451836ed506ccb940872f1443942e4e
A mechanism is introduced whereby a runtime method can be called
from code compiled with the optimizing compiler to deoptimize into
the interpreter. This can be used to establish invariants in the managed
code. If the invariant does not hold at runtime, we deoptimize and continue
execution in the interpreter. This allows optimizing the managed code as
if the invariant had been proven at compile time. However, exceptions
are still thrown according to the semantics demanded by the spec.
The invariant and optimization included in this patch are based on the
length of an array. Given a set of array accesses with constant indices
{c1, ..., cn}, we can optimize away all bounds checks iff 0 <= min(ci) and
max(ci) < array-length. The first condition can be proven statically. The
second can be established with a deoptimization-based invariant. This
replaces n bounds checks with one invariant check (plus slow-path code).
Change-Id: I8c6e34b56c85d25b91074832d13dba1db0a81569
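A minimal Java sketch of the idea. Here the invariant check is written out by hand; in the real patch the compiler inserts it and the slow path deoptimizes to the interpreter. All names are hypothetical:

```java
public class DeoptBce {
    // Constant indices {0, 1, 2, 3}: min = 0 >= 0 holds statically, and
    // max = 3 < a.length is checked once up front. If the single check
    // passes, all four per-access bounds checks are unnecessary.
    static int sumFirstFour(int[] a) {
        if (3 >= a.length) {
            return slowPath(a); // stands in for deoptimizing to the interpreter
        }
        return a[0] + a[1] + a[2] + a[3]; // no bounds checks needed here
    }

    // The slow path performs the accesses normally, so an
    // ArrayIndexOutOfBoundsException is thrown exactly where the spec demands.
    static int slowPath(int[] a) {
        return a[0] + a[1] + a[2] + a[3];
    }

    public static void main(String[] args) {
        if (sumFirstFour(new int[] {1, 2, 3, 4, 5}) != 10) {
            throw new AssertionError("fast path");
        }
        boolean threw = false;
        try {
            sumFirstFour(new int[] {1, 2});
        } catch (ArrayIndexOutOfBoundsException e) {
            threw = true;
        }
        if (!threw) throw new AssertionError("slow path must throw");
        System.out.println("ok");
    }
}
```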
This reverts commit 09895ebf2d98783e65930a820e9288703bb1a50b.
Change-Id: I7544022d896ef4353bc2cdf4b036403ed20c956d
Change-Id: I96887c295eb9a23dad4c9cc05d0a0e3ba17f674d
Change-Id: I8d2ab8d956ad0ce313928918c658d49f490ad081
Change-Id: Id9aafcc13c1a085c17ce65d704c67b73f9de695d
Move the logic for saving/restoring live registers in slow paths
into the SlowPathCode class. Also add a RecordPcInfo helper to
SlowPathCode that will act as the placeholder for saving correct
stack maps.
Change-Id: I25c2bc7a642ef854bbc8a3eb570e5c8c8d2d030c
Change-Id: I415b50d58b30cab4ec38077be22373eb9598ec40
So we should use the result type instead of the input type
for knowing which instruction to use.
Bug: 19454010
Change-Id: I88782ad27ae8c8e1b7868afede5057d26f14685a
This works by adding a new instruction (HBoundType) after each `if (a
instanceof ClassA) {}` to bound the type that `a` can take in the True-
dominated blocks.
Change-Id: Iae6a150b353486d4509b0d9b092164675732b90c
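In source terms the narrowing looks like this (a hedged Java sketch; `describe` is a hypothetical name, and HBoundType itself is an IR node rather than source syntax):

```java
public class BoundTypeSketch {
    static int describe(Object a) {
        if (a instanceof String) {
            // Here the compiler inserts HBoundType: inside this
            // true-dominated block, `a` is known to be a String, so the
            // checked cast below can later be proven redundant and removed.
            return ((String) a).length();
        }
        return -1;
    }

    public static void main(String[] args) {
        if (describe("abc") != 3) throw new AssertionError();
        if (describe(Integer.valueOf(42)) != -1) throw new AssertionError();
        System.out.println("ok");
    }
}
```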
We used to be forgiving because of HIntConstant(0) also being
used for null. We now create a special HNullConstant for such uses.
Also, we need to run dead phi elimination twice during SSA
building to ensure correctness.
Change-Id: If479efa3680d3358800aebb1cca692fa2d94f6e5
Also reserve a D register for temp.
Change-Id: I6584d9005b0f5685c3afcd8e9153b4c87b56aa8e
Change-Id: Ie2a540ffdb78f7f15d69c16a08ca2d3e794f65b9
Add arm32 intrinsics to the optimizing compiler.
Change-Id: If4aeedbf560862074d8ee08ca4484b666d6b9bf0
Avoid suspend checks and stack changes when not needed.
Change-Id: I0fdb31e8c631e99091b818874a558c9aa04b1628
The [i, i + 1) interval scheme we chose for representing
lifetime positions is not optimal for this optimization.
It does not, however, prevent recognizing a non-split interval
during the TryAllocateFreeReg phase and trying to re-use
its inputs' registers.
Change-Id: I80a2823b0048d3310becfc5f5fb7b1230dfd8201
Change-Id: I0b53d63141395e26816d5d2ce3fa6a297bb39b54
Change-Id: I044757a2f06e535cdc1480c4fc8182b89635baf6
Native code and ART do not have the same calling convention,
so we need to adjust blocked and allocated registers.
Change-Id: I606b2620c0e5a54bd60d6100a137c06616ad40b4
Change-Id: I7c519b7a828c9891b1141a8e51e12d6a8bc84118
| |
- Share the computation of core_spill_mask and fpu_spill_mask
between backends.
- Remove explicit stack overflow check support: we would need to
adjust the checks, and since they are not tested, they would easily bitrot.
Change-Id: I0b619b8de4e1bdb169ea1ae7c6ede8df0d65837a
Will work on other architectures and FP support in other CLs.
Change-Id: I8cef0343eedc7202d206f5217fdf0349035f0e4d
HNot folds to ~, not !.
Change-Id: I681f968449a2ade7110b2f316146ad16ba5da74c
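The distinction is easy to check in Java: HNot models the bitwise complement `~`, whose constant-folding results differ from boolean negation `!`:

```java
public class HNotFolding {
    public static void main(String[] args) {
        // HNot is bitwise complement: ~x == -x - 1 for two's-complement ints.
        if (~5 != -6) throw new AssertionError();
        if (~0 != -1) throw new AssertionError();
        if (~(-1) != 0) throw new AssertionError();
        // Boolean negation is a different operation: !true would be false
        // (i.e. 0), whereas ~1 is -2, not 0.
        if (~1 == 0) throw new AssertionError();
        System.out.println("ok");
    }
}
```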
This reverts commit c399fdc442db82dfda66e6c25518872ab0f1d24f.
Change-Id: I19f8215c4b98f2f0827e04bf7806c3ca439794e5
ImplicitNullChecks are recorded only for instructions directly (see NB
below) preceded by NullChecks in the graph. This way we avoid recording
redundant safepoints and minimize the code size increase.
NB: ParallelMoves might be inserted by the register allocator between
the NullChecks and their uses. These modify the environment, and the
correct action would be to reverse their modification. This will be
addressed in a follow-up CL.
Change-Id: Ie50006e5a4bd22932dcf11348f5a655d253cd898
Libcore tests fail.
This reverts commit 41aedbb684ccef76ff8373f39aba606ce4cb3194.
Change-Id: I2572f120d4bbaeb7a4d4cbfd47ab00c9ea39ac6c
Enabled on ARM for longs and doubles.
Change-Id: Id8792d08bd7ca9fb049c5db8a40ae694bafc2d8b
Change-Id: I52744382a7e3d2c6c11a43e027d87bf43ec4e62b
- for backends: arm, arm64, x86, x86_64
- fixed parameter passing for CodeGenerator
- 003-omnibus-opcodes test verifies that NullPointerExceptions work as
expected
Change-Id: I1b302acd353342504716c9169a80706cf3aba2c8
Comments were from:
https://android-review.googlesource.com/#/c/121992.
Change-Id: I8c59b30a356d606f12c50d0c8db916295a5c9e13
Hard-float calling convention uses S14 and D7 for argument passing,
so we cannot use them.
Change-Id: I77a2d8c875677640204baebc24355051aa4175fd
The ParallelMoveResolver does not work with pairs. Instead,
decompose the pair into two individual moves.
Change-Id: Ie9d3f0b078cef8dc20640c98b20bb20cc4971a7f
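A hedged sketch of the decomposition (the real ParallelMoveResolver is C++ inside ART; the classes and the read-all-then-write-all scheduling below are simplified stand-ins):

```java
import java.util.ArrayList;
import java.util.List;

public class PairMoveDecomposition {
    // A single 32-bit move between register slots (hypothetical model).
    static final class Move {
        final int src, dst;
        Move(int src, int dst) { this.src = src; this.dst = dst; }
    }

    // Decompose a 64-bit pair move into two individual 32-bit moves,
    // which the parallel-move machinery can then schedule like any others.
    static List<Move> decomposePairMove(int srcLow, int srcHigh,
                                        int dstLow, int dstHigh) {
        List<Move> moves = new ArrayList<>();
        moves.add(new Move(srcLow, dstLow));
        moves.add(new Move(srcHigh, dstHigh));
        return moves;
    }

    // Execute moves with parallel semantics: read all sources before
    // writing any destination, so overlapping pairs are handled correctly.
    static void applyParallel(int[] regs, List<Move> moves) {
        int[] vals = new int[moves.size()];
        for (int i = 0; i < moves.size(); i++) vals[i] = regs[moves.get(i).src];
        for (int i = 0; i < moves.size(); i++) regs[moves.get(i).dst] = vals[i];
    }

    public static void main(String[] args) {
        // Overlapping case: pair (r0, r1) -> (r1, r2).
        int[] regs = {10, 20, 30};
        applyParallel(regs, decomposePairMove(0, 1, 1, 2));
        if (regs[1] != 10 || regs[2] != 20) throw new AssertionError();
        System.out.println("ok");
    }
}
```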
Add intrinsics infrastructure to the optimizing compiler.
Add almost all intrinsics supported by Quick to the x86-64 backend.
Further intrinsics require more assembler support.
Change-Id: I48de9b44c82886bb298d16e74e12a9506b8e8807
Currently reserve a global register DTMP for these operations.
Change-Id: Ie88b4696af51834492fd062082335bc2e1137be2
Change-Id: I82f51cff87765a3aeeb861d2ae64978f2e762c73
Change-Id: I16d927ee0a0b55031ade4c92c0095fd74e18ed5b
Comment in arm_lir.h says:
* If a 64-bit argument would span the register/memory argument
* boundary, it will instead be fully passed in the frame.
This change implements such logic for all platforms. We still need
to pass the low part in register as well because I haven't ported
the jni compilers (x86 and mips) to it.
Once the jni compilers are updated, we can remove the register
assignment.
Note that this greatly simplifies optimizing's register allocator
by not having to understand a long spanning register and memory.
Change-Id: I59706ca5d47269fc46e5489ac99bd6576e87e7f3
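A minimal sketch of the assignment rule described above, assuming an ARM-style convention with four argument registers (the constant and helper name are hypothetical):

```java
public class LongArgPlacement {
    static final int NUM_ARG_REGS = 4; // ARM-style r0-r3 (assumed)

    // A 64-bit argument needs two consecutive registers. If it would span
    // the register/memory boundary, pass it fully in the frame instead.
    static boolean passLongInRegisters(int nextFreeReg) {
        return nextFreeReg + 2 <= NUM_ARG_REGS;
    }

    public static void main(String[] args) {
        if (!passLongInRegisters(2)) throw new AssertionError(); // r2/r3 fit
        if (passLongInRegisters(3)) throw new AssertionError();  // would span
        System.out.println("ok");
    }
}
```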