Open Bug 1848595 Opened 1 year ago Updated 1 year ago

For-in enumeration is slow for indices outside the static string table

Categories

(Core :: JavaScript Engine, task, P5)

People

(Reporter: mayankleoboy1, Unassigned)

References

(Blocks 1 open bug)

Details

Attachments

(2 files)

The original codepen demo is https://codepen.io/timseverien/pen/rNNrezm. This is probably mostly a gfx/canvas demo.

However, in the JS part, if you change the following:
Original: const PATTERN_SIZE = 4;
Change to: const PATTERN_SIZE = 512;

Now the demo appears to spend most of its time in JS (around GC?).

Nightly: https://share.firefox.dev/458cN8P
Chrome: https://share.firefox.dev/456T9KA

The usual caveats apply:
1. The modified demo is probably a stress test/synthetic test and not at all representative of real-world code.
2. I don't know if it's correct to compare the total time spent in Chrome vs. Nightly here, as GC behaviour differs between the browsers.

Feel free to resolve as INVALID.

Attached file Modified_Standalone_Testcase.html (deleted) —
Attached file about:support (deleted) —

The NativeIterator constructor appears to be allocating a string for the name of every indexed property in an array when we iterate it with a for-in loop.

This may be related to the changes in bug 1799025.
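
A minimal sketch of the kind of code that hits this path (the array size here is illustrative, not taken from the attached testcase):

  // Iterating a large dense array with for-in makes the engine build a
  // property iterator, allocating a string for every index along the way.
  const data = new Array(100000).fill(0);
  let sum = 0;
  for (const key in data) {
    // `key` is the string "0", "1", ..., "99999", not a number.
    sum += data[key];
  }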

Flags: needinfo?(iireland)

I think this would have been just as bad prior to bug 1799025, but yeah, that seems suboptimal.

We don't cache the iterator because there are indexed properties. When we create the NativeIterator, we call IdToString to convert the PropertyKey into a string. We have a static table mapping small integers to strings, but that only goes up to 256. I suspect that V8 has more static strings than we do, and that's why they're not hitting a cliff.

I haven't dug into the demo to figure out exactly how high the indices go. Broadly speaking, I think the answer here is "don't use a for-in loop on a big array". We could improve this particular demo by increasing the size of our table to trade off memory for speed, but this synthetic benchmark isn't sufficient motivation.
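
To spell out the workaround, something along these lines sidesteps the key-string allocation entirely (a sketch with illustrative names, not the demo's actual code):

  // An index-based loop keeps the index as a number, so no
  // PropertyKey-to-string conversion is needed; for-of over the
  // values behaves similarly.
  for (let i = 0; i < data.length; i++) {
    sum += data[i];
  }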

Poking around in the V8 code, I see that they use a dynamic hash table instead of a static array, which avoids the perf cliff in exchange for a bit of memory and worse happy-path performance.
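
Roughly, the tradeoff looks like this (a hedged sketch in JS terms, not the actual V8 or SpiderMonkey code):

  // Static table: O(1) array lookup, but only covers small indices.
  const staticStrings = Array.from({ length: 256 }, (_, i) => String(i));

  // Dynamic table: covers any index, at the cost of a hash lookup and
  // extra memory for every distinct index ever converted.
  const dynamicStrings = new Map();

  function indexToString(i) {
    if (i < staticStrings.length) return staticStrings[i];
    let s = dynamicStrings.get(i);
    if (s === undefined) {
      s = String(i);
      dynamicStrings.set(i, s);
    }
    return s;
  }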

I don't think this is worth putting a lot of effort into until we get more of an indication that it matters in real code.

Severity: -- → S4
Flags: needinfo?(iireland)
Priority: -- → P5
Summary: Nightly is ~2x slower than Chrome on a modified Codepen demo (and appears to spend tons of time around GC - full_cell_ptr_str_buffer) → For-in enumeration is slow for indices outside the static string table

(In reply to Iain Ireland [:iain] from comment #4)

> When we create the NativeIterator, we call IdToString to convert the PropertyKey into a string.

This is probably a dumb question, but could we store those properties as Ids?

We need them as strings eventually, because that's what a for-in loop emits. We could store them as ids and delay the conversion until the last minute, but in the common case we create the NativeIterator once and then cache it and reuse it for every object with the same shape, so the status quo is more efficient.
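
For completeness, the spec-level reason the conversion has to happen somewhere is easy to check in plain JS:

  // for-in always yields property keys as strings, even for array indices.
  for (const k in [10, 20, 30]) {
    console.log(typeof k, k); // "string" "0", "string" "1", "string" "2"
  }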
