Closed Bug 1277011 Opened 8 years ago Closed 8 years ago

Wasm baseline: ARM-32 support

Categories

(Core :: JavaScript Engine: JIT, defect, P1)


Tracking


RESOLVED FIXED
mozilla52
Tracking Status
firefox52 --- fixed

People

(Reporter: lth, Assigned: lth)

References

Details

Attachments

(8 files, 7 obsolete files)

(deleted), patch
nbp
: review+
Details | Diff | Splinter Review
(deleted), patch
bbouvier
: review+
Details | Diff | Splinter Review
(deleted), patch
Details | Diff | Splinter Review
(deleted), patch
bbouvier
: review+
Details | Diff | Splinter Review
(deleted), patch
h4writer
: review+
Details | Diff | Splinter Review
(deleted), patch
h4writer
: review+
Details | Diff | Splinter Review
(deleted), patch
h4writer
: review+
Details | Diff | Splinter Review
(deleted), patch
h4writer
: review+
Details | Diff | Splinter Review
Almost none of the platform hooks in the initial wasm baseline compiler (bug 1232205) have working ARM code. This bug is the placeholder for bringing the wasm baseline to parity with x64 and x86. This is a big job for two reasons. One is that there are many platform hooks. The other is that, as we add the ARM code, we should remove the platform code from the wasm baseline compiler by pushing the abstractions we need into the MacroAssembler layer.
No longer blocks: 1232205
Depends on: 1232205
Depends on: 1278283
Attached patch bug1277011-refactor-remainder.patch (obsolete) (deleted) — Splinter Review
Setting things up: simplify remainderI32.
Assignee: nobody → lhansen
Status: NEW → ASSIGNED
Attached patch bug1277011-use-new-masm-abstractions.patch (obsolete) (deleted) — Splinter Review
Use all the new MASM abstractions from bug 1278283. Once this lands, the ARM dependencies that remain are:
- BaselineCanCompile() (which will need to check for HasIDIV())
- Calls
- Globals
- Heap load/store
- Function entry/exit
- Tableswitch

Not asking for any reviews here; many other things must land first.
Any MIPS changes should wait until this one lands, because the MacroAssembler changes in bug 1278283 will have fairly large impacts in the parts of the file that the MIPS changes would touch, and will also ensure that the MIPS changes would mainly touch files in jit/mips*.
(In reply to Lars T Hansen [:lth] from comment #3)
> Any MIPS changes should wait until this one lands, because the
> MacroAssembler changes in bug 1278283 will have fairly large impacts in the
> parts of the file that the MIPS changes would touch, and will also ensure
> that the MIPS changes would mainly touch files in jit/mips*.

OK.
Refactoring of some of the ARM code for loading and storing in the asm.js heap.
Attachment #8760851 - Attachment is obsolete: true
Attachment #8760853 - Attachment is obsolete: true
Attachment #8765463 - Flags: review?(nicolas.b.pierron)
Attached patch bug1277011-arm-support.patch (obsolete) (deleted) — Splinter Review
Wasm baseline compiler support for ARM-32. Notable:
- The compiler requires SDIV/UDIV; this is probably OK long-term.
- The compiler requires HW support for aligned load/store; this is temporary.
- Calls to imports save and restore the GlobalReg and HeapReg, but I'm not actually sure that this is necessary and I'd appreciate feedback.
Attachment #8765465 - Flags: review?(bbouvier)
Comment on attachment 8765463 [details] [diff] [review]
bug1277011-abstract-heap-access.patch

Review of attachment 8765463 [details] [diff] [review]:
-----------------------------------------------------------------

::: js/src/jit/arm/MacroAssembler-arm.cpp
@@ +2121,5 @@
> +MacroAssemblerARMCompat::ma_loadHeapAsmJS(Register ptrReg, int size, bool needsBoundsCheck,
> +                                          bool faultOnOOB, FloatRegister output)
> +{
> +    if (!needsBoundsCheck) {
> +        VFPRegister vd(output);

nit: This is a no-op copy; use "output" in place of "vd". (Same below and in other functions.)

@@ +2125,5 @@
> +        VFPRegister vd(output);
> +        if (size == 32)
> +            ma_vldr(vd.singleOverlay(), HeapReg, ptrReg, 0, Assembler::Always);
> +        else
> +            ma_vldr(vd, HeapReg, ptrReg, 0, Assembler::Always);

nit: This function duplicates code paths only for the ".singleOverlay()" call and for "NaNXXGlobalDataOffset". Factor these out:

    size_t nanOffset = wasm::NaN64GlobalDataOffset;
    if (size == 32) {
        output = output.singleOverlay();  // follow-up: ideally this should bubble up out of this function.
        nanOffset = wasm::NaN32GlobalDataOffset;
    }
Attachment #8765463 - Flags: review?(nicolas.b.pierron) → review+
Keywords: leave-open
Comment on attachment 8765465 [details] [diff] [review]
bug1277011-arm-support.patch

Canceling review for now; a rebased patch is coming.
Attachment #8765465 - Flags: review?(bbouvier)
Attached patch bug1277011-arm-support-v2.patch (obsolete) (deleted) — Splinter Review
Current ARM patch rebased (on top of pending x86 patch, bug 1277008). This could usefully be split into two: one that addresses some general concerns, notably the return value handling, and one that implements ARM support.
Attachment #8765465 - Attachment is obsolete: true
Attached patch bug1277011-cleanup-calls.patch (deleted) — Splinter Review
Calls have evolved a little haphazardly; let's clean them up first so that the ARM patch can be about ARM.
Attachment #8770948 - Flags: review?(bbouvier)
Comment on attachment 8770948 [details] [diff] [review]
bug1277011-cleanup-calls.patch

Review of attachment 8770948 [details] [diff] [review]:
-----------------------------------------------------------------

Looks good.

::: js/src/asmjs/WasmBaselineCompile.cpp
@@ +28,1 @@
> * - int64 load and store

pre-existing: can remove this line too
Attachment #8770948 - Flags: review?(bbouvier) → review+
Attached patch bug1277011-armsim.patch (obsolete) (deleted) — Splinter Review
This is a tweak to the ARM simulator: it adds "fault handling" also for halfword accesses, and it adds a flag to readW and writeW (for now) so that we can disable unaligned accesses for instructions that don't support it. More could be done here but every little bit helps, I hope.
Attachment #8771371 - Flags: review?(bbouvier)
Attached patch bug1277011-unaligned.patch (deleted) — Splinter Review
WIP: Support unaligned accesses on ARM for asm.js/wasm load and store. This passes all tests and is probably roughly right but let's hold off on the review.
Attached patch bug1277011-arm-support-v3.patch (obsolete) (deleted) — Splinter Review
WIP: ARM support for the baseline compiler. Passes all tests, but holding off on review for now.
Attachment #8767852 - Attachment is obsolete: true
Comment on attachment 8771371 [details] [diff] [review]
bug1277011-armsim.patch

Review of attachment 8771371 [details] [diff] [review]:
-----------------------------------------------------------------

There's an issue with the test and I don't get the names, so I would like another read please.

::: js/src/jit/arm/Simulator-arm.cpp
@@ +1506,4 @@
> {
>     // The regexp engine emits unaligned loads, so we don't check for them here
>     // like most of the other methods do.
> +    if ((addr & 3) == 0 || (DontFault && !HasAlignmentFault())) {

The LHS of the right sub-condition is a no-op: did you mean f == DontFault?

@@ +1531,2 @@
> {
> +    if ((addr & 3) == 0 || (DontFault && !HasAlignmentFault())) {

ditto

::: js/src/jit/arm/Simulator-arm.h
@@ +205,5 @@
>     };
>
> +    enum FaultType {
> +        DoFault,
> +        DontFault

I get really confused with the naming: is it the requirement we're expressing, or the actual behavior that's going to be triggered? Could you name these FaultOnUnaligned and IgnoreUnaligned to make this clearer?
Attachment #8771371 - Flags: review?(bbouvier)
Attached patch bug1277011-armsim-v2.patch (deleted) — Splinter Review
(Yeah, I should have quit a half hour earlier that day.) This fixes the concerns with the previous patch. I could also clean up halfword accesses but opted not to at this time.
Attachment #8771371 - Attachment is obsolete: true
Attachment #8775368 - Flags: review?(bbouvier)
Comment on attachment 8775368 [details] [diff] [review]
bug1277011-armsim-v2.patch

Review of attachment 8775368 [details] [diff] [review]:
-----------------------------------------------------------------

OK, I think I finally understand it, so r=me. But if that is not the case, please re-ask for review for a deeper read.

::: js/src/jit/arm/Simulator-arm.cpp
@@ +1506,4 @@
> {
>     // The regexp engine emits unaligned loads, so we don't check for them here
>     // like most of the other methods do.
> +    if ((addr & 3) == 0 || (f == AllowUnaligned && !HasAlignmentFault())) {

So we'll have an alignment fault if f == AllowUnaligned && HasAlignmentFault()? Shouldn't this be f == AllowUnaligned || !HasAlignmentFault()? Or am I misunderstanding the meaning of AllowUnaligned again...

Thinking about it, this seems to map the semantics of "some memory accesses are allowed to be misaligned, if the processor doesn't always fault on misaligned accesses". Is that it? If so, make it painfully explicit in a comment near UnalignedPolicy, please.

@@ +1531,2 @@
> {
> +    if ((addr & 3) == 0 || (f == AllowUnaligned && !HasAlignmentFault())) {

ditto

@@ +4321,5 @@
>     int r = 0;
>     while (r < regs) {
>         uint32_t data[2];
>         get_d_register(Vd + r, data);
> +        writeW(address, data[0], instr, AllowUnaligned);

According to the ARMv7 reference manual (A3.2.1, unaligned data accesses), it seems that vst1 (resp. vld1 below) can encode an alignment field which, if set to the non-default alignment value, will trigger an alignment fault all the time. That being said, it doesn't seem this code here even reads this alignment field...
Attachment #8775368 - Flags: review?(bbouvier) → review+
(In reply to Benjamin Bouvier [:bbouvier] from comment #21)
> So we'll have an alignment fault if f == AllowUnaligned &&
> HasAlignmentFault()? Shouldn't this be f == AllowUnaligned ||
> !HasAlignmentFault()? Or am I misunderstanding the meaning of AllowUnaligned
> again...
>
> Thinking about it, this seems to map the semantics of "some memory accesses
> are allowed to be misaligned, if the processor doesn't always fault on
> misaligned accesses". Is that it? If so, make it painfully explicit in a
> comment near UnalignedPolicy, please.

Yes, that's it. HasAlignmentFault is really misnamed; it should be named AlwaysFaultOnUnaligned or NeverAllowUnaligned or something like that. "Has" sounds more like a capability than an immutable aspect of the chip's configuration. I'll add some documentation when I land this.

> According to ARMv7 reference manual (A3.2.1, unaligned data accesses), it
> seems that vst1 (resp. vld1 below) can encode an alignment field, which, if
> set to the non-default alignment value, will trigger an alignment fault all
> the time.

Oh, nice catch.

> That being said, it doesn't seem this code here even reads this
> alignment field...

The irony this week is that several Googlers have told me about "their" ARM simulator that allows them to run ARM code on their desktops. They're astonished when I tell them we've been working on cleaning it up and extending it for a long time already. I will document this problem before landing.
Depends on: 1289054
The big missing piece here is 64-bit support. Supposedly the masm bits are all done, we "just" need to hook them in.
Priority: -- → P2
Attached patch bug1277011-refactor-div.patch (deleted) — Splinter Review
This refactors the I64 divide and modulo operations that were recently extended for x86. (No semantic changes.) There are two purposes here:
- centralize common code and comply with the existing code layering style
- set us up for a new layering pattern that will be pervasive on ARM, namely, choosing the code generation strategy (inline or call) by #ifdef in the dispatch routine, not further down in the emitter
Attachment #8798843 - Flags: review?(hv1989)
This fixes an old bug that was triggered by the x86 code (and uncovered in ARM test runs): sometimes the join register needs to be reserved across a register allocation because it will be needed subsequently but can't just be allocated. On the 64-bit platforms, reserving a 32-bit register is sufficient to reserve the full 64 bits of that register, but on a 32-bit platform that's not good enough: we must reserve all 64 bits when a 64-bit type is used.
Attachment #8798856 - Flags: review?(hv1989)
Comment on attachment 8798843 [details] [diff] [review]
bug1277011-refactor-div.patch

Review of attachment 8798843 [details] [diff] [review]:
-----------------------------------------------------------------

::: js/src/asmjs/WasmBaselineCompile.cpp
@@ +165,5 @@
> +
> +# define QUOTIENT_I64_CALLOUT
> +# define QUOTIENT_U64_CALLOUT
> +# define REMAINDER_I64_CALLOUT
> +# define REMAINDER_U64_CALLOUT

Not sure we need 4 different defines. I don't think we will have only one of those 4. Can we call this "QUOTIENT_REMAINDER_INT64_CALLOUT"?
Attachment #8798843 - Flags: review?(hv1989) → review+
Comment on attachment 8798856 [details] [diff] [review]
bug1277011-joinreg-reserve.patch

Review of attachment 8798856 [details] [diff] [review]:
-----------------------------------------------------------------

Good catch. I overlooked this.

::: js/src/asmjs/WasmBaselineCompile.cpp
@@ +1718,5 @@
>             break;
>         }
>     }
>
> +    void reserveJoinRegI(ExprType type) {

maybeReserveJoinRegI

@@ +1725,5 @@
> +        else if (type == ExprType::I64)
> +            needI64(joinRegI64);
> +    }
> +
> +    void unreserveJoinRegI(ExprType type) {

maybeUnreserveJoinRegI
Attachment #8798856 - Flags: review?(hv1989) → review+
(In reply to Hannes Verschore [:h4writer] from comment #30)
> Not sure we need 4 different defines. I don't think we will have only one of
> those 4.
> Can we call this "QUOTIENT_REMAINDER_INT64_CALLOUT" ?

We can absolutely do that, though in reality (on ARM) this just fits the pattern that different operations are implemented differently on different platforms, and it just seemed a shame to couple them more than necessary. (I agree we will not have only one of those four; what I expect is the case where we will have only two, but I don't know if it will be sliced along quotient/remainder or along signed/unsigned.)
(In reply to Hannes Verschore [:h4writer] from comment #31) > Comment on attachment 8798856 [details] [diff] [review] > bug1277011-joinreg-reserve.patch > > > maybeReserveJoinRegI > maybeUnreserveJoinRegI Definitely.
Attached patch bug1277011-refactor-arm.patch (deleted) — Splinter Review
Refactor a little bit of the ARM code generation into the macro-assembler.
Attachment #8800234 - Flags: review?(hv1989)
Attached patch bug1277011-baseline-arm.patch (obsolete) (deleted) — Splinter Review
Snapshot for safekeeping: baseline compiler fixes to support ARM-32. This passes, on the simulator, all asm.js tests as well as all wasm tests *except* the non-canonical NaN tests and the grow_heap tests (I think it's just not reloading the HeapReg after the callout). Unaligned loads and stores work as well as in the Ion compiler, i.e., when simulated SEGV handling is disabled the two compilers generate code that crashes at the same spots when loads that pretend to be aligned are actually not.
Comment on attachment 8800234 [details] [diff] [review]
bug1277011-refactor-arm.patch

Review of attachment 8800234 [details] [diff] [review]:
-----------------------------------------------------------------

LGTM

::: js/src/jit/arm/MacroAssembler-arm.cpp
@@ +5343,5 @@
> +void
> +MacroAssembler::wasmTruncateDoubleToUInt32(FloatRegister input, Register output, Label* oolEntry)
> +{
> +    wasmTruncateToInt32(input, output, MIRType::Double, /*isUnsigned=*/true, oolEntry);

Nit: some extra spaces: /* isUnsigned = */ true

@@ +5349,5 @@ (wasmTruncateDoubleToInt32), @@ +5355,5 @@ (wasmTruncateFloat32ToUInt32), @@ +5361,5 @@ (wasmTruncateFloat32ToInt32)

Same style nit in each.
Attachment #8800234 - Flags: review?(hv1989) → review+
Try build is green for the three pending patches.
Depends on: 1311403
Depends on: 1311433
Depends on: 1311419
Attached patch bug1277011-arm-support-v4.patch (deleted) — Splinter Review
Support for 32-bit ARM. This depends on bug 1311433 to apply, and on bug 1311403 to pass the test suite. If I were to engage in self-critique here, it would be that we could probably share more of the load and store code with the Ion back-end, but I'm pretty happy with what I have right now, so I decided to leave it as is. Note that this is bug-compatible with the Ion back-end: for known-unaligned loads and stores, heap access information is not inserted for the individual byte loads and stores. When the abstractions in the MacroAssembler are fixed to do that, it will happen for the baseline compiler automatically.
Attachment #8771373 - Attachment is obsolete: true
Attachment #8802860 - Flags: review?(hv1989)
Attachment #8800235 - Attachment is obsolete: true
Comment on attachment 8802860 [details] [diff] [review]
bug1277011-arm-support-v4.patch

Review of attachment 8802860 [details] [diff] [review]:
-----------------------------------------------------------------

The ifdefs are really getting unwieldy. Should we abstract this away in architecture-specific files? Nice work!

::: js/src/asmjs/WasmBaselineCompile.cpp
@@ +1791,5 @@
> +    // See Push() in MacroAssembler-x86-shared.cpp.
> +    //
> +    // See Push() in MacroAssembler-arm.cpp as well as
> +    // VFPRegister::size() in Architecture-arm.h. Unlike x86
> +    // we push a single if the register is a single.

The original comment was structured a little bit better:

    // The size computations come from the implementation of Push in
    // MacroAssembler-x86-shared.cpp and MacroAssembler-arm-shared.cpp,
    // and from VFPRegister::size() in Architecture-arm.h.
    // On ARM, unlike x86, we push a single for float.

@@ +2266,5 @@
> +#elif defined(JS_CODEGEN_ARM)
> +    loadI64Low(scratch, arg);
> +    masm.store32(scratch, Address(StackPointer, argLoc.offsetFromArgBase() + INT64LOW_OFFSET));
> +    loadI64High(scratch, arg);
> +    masm.store32(scratch, Address(StackPointer, argLoc.offsetFromArgBase() + INT64HIGH_OFFSET));

If you s/move32/store32/ we can combine the JS_CODEGEN_X86 and JS_CODEGEN_ARM cases.

@@ +2445,5 @@
> +    masm.bind(&here);
> +    uint32_t offset = here.offset() - theTable->offset();
> +
> +    // Read PC+8
> +    masm.as_mov(scratch, O2Reg(pc));

Please use ma_mov here. That way you don't need O2Reg.

@@ +2593,5 @@
>         masm.xor64(srcDest.reg, srcDest.reg);
> +        masm.jump(done);
> +    } else {
> +        masm.jump(trap(Trap::IntegerOverflow));
> +    }

hehe, makes more sense indeed.

@@ +2852,5 @@
>     MOZ_ASSERT(dest.reg.low == eax);
>     MOZ_ASSERT(dest.reg.high == edx);
>     masm.cdq();
> +#elif defined(JS_CODEGEN_ARM)
> +    masm.as_mov(src.reg, O2Reg(dest.reg.low));

ma_mov

@@ +3094,5 @@
> #if defined(JS_CODEGEN_X64)
>     masm.cmpq(rhs.reg.reg, lhs.reg.reg);
>     masm.emitSet(cond, dest.reg);
> +#elif defined(JS_CODEGEN_X86) || defined(JS_CODEGEN_ARM)
> +    // TODO / OPTIMIZE: This is pretty branchy, we should be able to do better.

Did you open a follow-up bug for this? You can put me as mentor.

@@ +3502,5 @@
> +        storeF64(access, ptr, src.f64(), tmp1, tmp2);
> +        break;
> +    default:
> +        MOZ_CRASH("Compiler bug: unexpected array type");
> +    }

This should be in "wasmStore(access, value, dstAddr, tmp1, tmp2)". Ditto for wasmLoad. No need to have the same signature as x86/x64. But we can have similarity with the cases above, which we should do.

@@ +3635,5 @@
> +    BufferOffset st =
> +        masm.ma_vstr(src.reg, VFPAddr(ptr.reg, VFPOffImm(0)), Assembler::Always);
> +    masm.append(access, st.getOffset(), masm.framePushed());

These functions should move inline into wasmStore/wasmLoad or into the MacroAssembler.
Attachment #8802860 - Flags: review?(hv1989) → review+
Priority: P2 → P1
(In reply to Hannes Verschore [:h4writer] from comment #45)
Hannes, one question about load() and store() for you among these comments.

> The ifdefs are really getting unwieldy.

Yes they are!

> Should we abstract this away in architecture specific files?

I think that will just make it worse (code duplication), but it's worth exploring either that or new abstractions in here that will reduce the ifdeffery.

> > + // TODO / OPTIMIZE: This is pretty branchy, we should be able to do better.
>
> Do you open a follow bug for this? You can put me as mentor.

So generally there are many TODO / OPTIMIZE points in this file; none of them have a bug yet because it's long-tail stuff of speculative value. This pattern is somewhat documented in the file header. I'll go through them and see if any are of obvious enough value to have tracking bugs; you're right that this would be a good candidate.

> this should be in "wasmStore(access, value, dstAddr, tmp1, tmp2)". Ditto for
> wasmLoad. No need to have the same signature as x86/x64. But we can have
> similarity with the cases above, which we should do.

OK, so the reason I did not want to do that was that I would then feel like I would have to rewrite CodeGenerator-arm.cpp to use these new abstractions, which I did not want to do. Can you live with me filing a follow-up bug for that work?

> That should move these function inline in wasmStore/wasmLoad or in
> MacroAssembler.

I disagree: the resulting function is virtually unreadable, and it'll get worse before we're done with optimizations.
Flags: needinfo?(hv1989)
(In reply to Lars T Hansen [:lth] from comment #46)
> Can you live with me filing a followup bug for that work.

Yes.

> I disagree - the resulting function is virtually unreadable, and it'll get
> worse before we're done with optimizations.

That's sad, but I understand your point. Though it is annoying to see the implementation on x86/x64 nicely abstracted in the macroassemblers and not to see the same for ARM; I hoped that would help reduce the clutter in that file a little bit. Seems not, and in that case I'm fine with this.
Flags: needinfo?(hv1989)
(In reply to Hannes Verschore [:h4writer] from comment #47)
> Though it is annoying to see the implementation on x86/x64 nicely abstracted
> in the macroassemblers and not seeing the same for ARM.

Ah, I misread your earlier comment! Yes, we can move these into the MacroAssembler, just not inline into the (abstracted) function in the MacroAssembler, probably, because that's a mess; they were inline earlier.
Keywords: leave-open
Status: ASSIGNED → RESOLVED
Closed: 8 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla52