Bug 1285267 (Closed) - Consider different heuristics for assigning groups to objects
Opened 8 years ago, closed 2 years ago
Categories: Core :: JavaScript Engine, defect, P3
Status: RESOLVED INVALID
People: Reporter: jandem, Unassigned
References: Blocks 1 open bug
Details: Keywords: triage-deferred
Attachments: 1 file: patch (deleted)
Right now we allocate groups based on the allocation site. This works fine in a lot of cases, but there are also a lot of cases where it breaks down.
For instance, consider this:
f({x: 1, y: 2});
f({x: 2, y: 3});
f({x: 3, y: 4});
The three objects will have different groups (because they have different allocation sites), and this makes the type information for |f| polymorphic if there are a lot of these objects.
The same holds for array literals that we know contain only integers, etc.
It seems like we could use heuristics to assign groups based on property names/types (if we know the properties, like in this case), or maybe have a way to merge groups later (but that seems more complicated).
I think it's worth experimenting with different approaches here. Ideally we'd add a mechanism to measure typeset sizes, so we can easily compare how different approaches perform on various workloads.
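To make the idea concrete, here is a pseudocode-level C++ sketch of what a property-name-based heuristic could look like. Every type and function name below (LiteralGroupKey, lookupLiteralGroup, createLiteralGroup) is invented for illustration and is not an existing SpiderMonkey API:

// Sketch only: key group selection on (clasp, proto, property names)
// instead of on the allocation site, so the three literals above share a group.
struct LiteralGroupKey {
    const Class* clasp;
    JSObject* proto;
    std::vector<PropertyName*> names;   // e.g. {x, y} for the literals above
};

ObjectGroup* groupForObjectLiteral(JSContext* cx, const LiteralGroupKey& key) {
    if (ObjectGroup* existing = lookupLiteralGroup(cx, key))
        return existing;                // reuse: keeps the type set for |f| monomorphic
    return createLiteralGroup(cx, key); // first literal with this class/proto/property set
}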
Reporter
Comment 1 • 8 years ago
Brian, any thoughts/ideas on this? It's a problem that shows up in real code (the Cheerp devs are seeing this, for instance).
Flags: needinfo?(bhackett1024)
Reporter
Comment 2 • 8 years ago
A related problem is that for things like |new Object()| we don't use allocation sites, but we can easily conflate type information (because objects with the same Class + proto share a group).
I wonder if we could generalize the preliminary object system we use for constructors. It would be really nice if we could give objects some temporary/placeholder group and then, after we know a bit more about the properties etc., decide which group to give the object(s): either a new group or a pre-existing one with similar type information.
It's a hard problem, but I think there must be something we can do here with some work.
Comment 3 • 8 years ago
Generalizing the preliminary objects system is, I think, the best way forward here. Without some knowledge of how objects created at a given site are used, there isn't a way to determine whether it would be better to group the objects in comment 0 together or keep them in separate groups.
The problem with using a placeholder group is that type sets that the object appears in (if it escapes into the heap or is passed to a function, say) will need some representation for the object which distinguishes it from other objects and can be tied to the final group the object is given.
One option would be to give all objects groups based on their allocation site (this ought to be happening for 'new Object()' FWIW, though that probably broke at some point), and after the preliminary objects have been allocated, assign the objects a new group based on the properties/types they end up having. This would still require care to avoid making |f| polymorphic before the group merging occurs, though.
A more extreme option would be to make all objects singletons and group them together later if lots of similar objects keep getting allocated at the same point and going through the same pathways (given similar properties, not becoming prototypes, etc.). This could require a lot of work to do well but would remove a lot of heuristics around the assigning of groups/singleton-ness to objects, along with patchwork for when those heuristics misfire (like the group->singleton transformation we do sometimes).
Flags: needinfo?(bhackett1024)
Reporter
Comment 4 • 8 years ago
(In reply to Brian Hackett (:bhackett) from comment #3)
> A more extreme option would be to make all objects singletons and group them
> together later if lots of similar objects keep getting allocated at the same
> point and going through the same pathways (given similar properties, not
> becoming prototypes, etc.). This could require a lot of work to do well but
> would remove a lot of heuristics around the assigning of
> groups/singleton-ness to objects, along with patchwork for when those
> heuristics misfire (like the group->singleton transformation we do
> sometimes).
I like this option a lot. What do you think of the following scheme:
(1) Initially, allocate all objects as singletons, but remember their allocation site (script + pc mostly).
(2) When a TypeSet is about to 'overflow', do the following:
(a) For each JSObject in the set:
- If the object is no longer a singleton at this point, remove the object and add the group instead.
- Else, see if a "similar" group exists. For now let's say based on shape + proto + clasp, maybe also element type if array (lots of room for improvement here).
* If yes, give the singleton object that group. Remember {allocation site => group}.
* If no, we have two options: create a new group or keep the object as singleton.
(b) If the TypeSet is still about to overflow, the objects are too different and we mark it as containing unknown objects.
Because step 2 is pretty expensive, we want to ensure we don't get stuck in step 2a for every object we allocate. Therefore:
(3) After an allocation site has allocated X objects, look at the groups assigned to objects allocated here:
- 1 group: monomorphic case, from now on assign this group to any objects allocated here.
- 0 groups: probably keep allocating singletons in this case, the objects are likely very different.
- 2+ groups: use a generic group for new objects allocated here.
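A rough pseudocode rendering of steps (2) and (3), in C++-flavored sketch form. Every identifier below (onTypeSetAboutToOverflow, findSimilarGroup, rememberSiteGroup, AllocationSite, X as the threshold from step (3), and so on) is invented for illustration; none of this is an actual SpiderMonkey API:

// Step (2): run when a TypeSet is about to overflow.
void onTypeSetAboutToOverflow(JSContext* cx, TypeSet* set) {
    for (JSObject* obj : set->objectEntries()) {
        if (!obj->isSingleton()) {
            set->replaceObjectWithGroup(obj);                    // (2)(a), first bullet
        } else if (ObjectGroup* g = findSimilarGroup(cx, obj)) { // shape + proto + clasp
            obj->setGroup(g);                                    // (2)(a), "if yes"
            rememberSiteGroup(obj->allocationSite(), g);
        }
        // "if no": create a new group or keep the object as a singleton.
    }
    if (set->stillAboutToOverflow())
        set->markUnknownObject();                                // (2)(b)
}

// Step (3): decide how to allocate at a site once it has allocated X objects.
ObjectGroup* groupForNextAllocation(AllocationSite* site) {
    if (site->allocCount() < X)
        return nullptr;                          // keep allocating singletons for now
    switch (site->groupsAssignedSoFar()) {
      case 0:  return nullptr;                   // objects very different: stay singletons
      case 1:  return site->onlyGroup();         // monomorphic: reuse that group
      default: return site->genericGroup();      // 2+ groups: use a generic group
    }
}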
-------
I think this solves a lot of problems:
(1) The objects in the example in comment 0 now all get the same group.
(2) When we create 1 or a few objects of a kind, we can optimize them as singletons. A typical use case is the following, where we really want the GameEngine object to be a singleton:
function GameEngine() { ... }
function init() {
engine = new GameEngine();
}
(3) We have this thing where we give large typed arrays a singleton type, mostly for the "heap arrays" used by asm.js and Mandreel. That hack is no longer necessary: typed arrays will now always get a singleton type, and only when we allocate many of them will we group them (I've seen tests where the current singleton-based-on-size optimization backfires).
Brian, can you think of any obvious perf/correctness issues here?
Flags: needinfo?(bhackett1024)
Comment 5 • 8 years ago
(In reply to Jan de Mooij [:jandem] from comment #4)
> I like this option a lot. What do you think of the following scheme:
>
> (1) Initially, allocate all objects as singletons, but remember their
> allocation site (script + pc mostly).
>
> (2) When a TypeSet is about to 'overflow', do the following:
> (a) For each JSObject in the set:
> - If the object is no longer a singleton at this point, remove the
> object and add the group instead.
> - Else, see if a "similar" group exists. For now let's say based on
> shape + proto + clasp, maybe also element type if array (lots of room for
> improvement here).
> * If yes, give the singleton object that group. Remember
> {allocation site => group}.
> * If no, we have two options: create a new group or keep the
> object as singleton.
> (b) If the TypeSet is still about to overflow, the objects are too
> different and we mark it as containing unknown objects.
>
> Because step 2 is pretty expensive, we want to ensure we don't get stuck in
> step 2a for every object we allocate. Therefore:
>
> (3) After an allocation site has allocated X objects, look at the groups
> assigned to objects allocated here:
> - 1 group: monomorphic case, from now on assign this group to any
> objects allocated here.
> - 0 groups: probably keep allocating singletons in this case, the
> objects are likely very different.
> - 2+ groups: use a generic group for new objects allocated here.
Hi, I have a couple concerns:
- (2) is pretty fragile. If a type set first overflows when one object is in the midst of being initialized (as seems likely to happen) then the objects in the set will not have uniform shapes and might not be given uniform groups. That could be hacked around but it wouldn't be ideal.
- The 2+ groups case in (3) isn't great; we won't be able to do any better (and maybe will do worse) with allocation sites that create objects that flow down different paths, getting different properties and being used in different ways.
To weigh the different options here, it might help to think about the conceptual framework surrounding object group transitions. When an object changes its group pointer, whether between singleton and non-singleton or from one group to another, that transition represents an edge in a graph of possible transitions. There are a couple of rules for this graph: (1) type sets containing one node also contain its descendants in this graph, and (2) property type updates to one node should also be reflected in that node's descendants.
Currently we allow a few different transitions. With prototype mutations and brain transplants we allow transitions from one arbitrary group to another, and mark both groups as unknown. With the preliminary object analysis and with unboxed object -> native transforms we allow transitions from one specific group to another, but watch out for those groups when updating type information to follow the rules above. When converting an object into a singleton we mark the source group as having unknown properties to capture (1), and avoid needing to enforce (2) since this is a one-time transition.
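Stated as conceptual pseudocode, the two rules for a transition edge amount to the following. The helpers here (typeSetsContaining, addPropertyUpdateTarget) are invented to restate the rules and are not how the engine actually enforces them:

// Adding a transition edge from |from| to |to| in the group-transition graph.
void addTransitionEdge(ObjectGroup* from, ObjectGroup* to) {
    // Rule (1): every type set that contains |from| must also contain |to|.
    for (TypeSet* set : typeSetsContaining(from))
        set->addType(to);
    // Rule (2): future property type updates on |from| must also be applied to |to|.
    from->addPropertyUpdateTarget(to);
}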
Anyways, then, it seems to me like there are two main options for how we can improve the assignment of groups to objects for this bug (and for future work too I guess):
A. Focus on singleton -> group transitions only.
This is what your proposed solution does, and I think it's the better of the two strategies. To address my concern about (2) above, I think it would be better to make the point of allocation the focus of the analysis, rather than the point where the type set gets filled. When enough objects have been allocated at a site, look at the individual objects that have been allocated there. Filter out objects that "deserve" to remain singletons (e.g. they are prototypes of other objects; I don't know if other heuristics should go here), and then, if enough objects remain, figure out a group for the objects, using a similar existing one if possible per (2)(a) above, and use that group for all future allocations at the site.
The main difference here vs. your proposal is that we would be looking at all the objects allocated at the site together when deciding on a group, rather than looking at each object individually. With the limited forms of group transitions possible with this strategy we just can't do a good job with polymorphic code, but see (B) below for how that could be improved with a more complicated analysis.
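Sketched very roughly, the allocation-site-driven analysis in option A could look like the pseudocode below. All names (AllocationSite, kSiteThreshold, kMinGroupable, findOrCreateSimilarGroup, setGroupForFutureAllocations, isUsedAsPrototype) are invented for illustration:

// Run once a site has allocated "enough" objects; thresholds and helpers are invented.
void maybeAssignGroupForSite(JSContext* cx, AllocationSite* site) {
    if (site->allocatedObjects().length() < kSiteThreshold)
        return;
    Vector<JSObject*> candidates(cx);
    for (JSObject* obj : site->allocatedObjects()) {
        if (obj->isUsedAsPrototype())   // "deserves" to remain a singleton
            continue;
        candidates.append(obj);
    }
    if (candidates.length() < kMinGroupable)
        return;
    // Reuse a similar existing group if possible (per (2)(a) in comment 4),
    // otherwise create a fresh one, then use it for all future allocations here.
    ObjectGroup* group = findOrCreateSimilarGroup(cx, candidates);
    for (JSObject* obj : candidates)
        obj->setGroup(group);
    site->setGroupForFutureAllocations(group);
}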
B. Focus on more generic group -> group transitions.
So for polymorphic code, suppose there's an allocation site where half the time the resulting object gets property set A and half the time the object gets property set B. The A and B objects flow down different paths and are always used in monomorphic code except at creation time.
If we wanted to do a good job with this code, we would need to give the objects a generic group when they were first created. But that initial group shouldn't have any property type information associated with it; instead, we would need to determine the points where the objects have flowed down path A or B and transition them to new groups Ga or Gb before properties have been assigned to them (if there are property types in the initial group that apply to A or B but not the other then per the (2) group transition rule we would pollute groups Ga and/or Gb with those types).
Identification of these points is entirely speculative; I haven't actually seen JS where we have this problem. I think the reason is that due to how JS is generally written (at least in benchmarks, blech), objects used in different ways will have different prototypes, and those different prototypes will lead them to have different groups and allow us to differentiate them properly.
So as far as I know going to this level of complexity is not necessary, but this is something to think about if you've seen polymorphic code where TI does horribly or if you see it in the future.
Flags: needinfo?(bhackett1024)
Comment 6 • 8 years ago
I have a work-in-progress patch that implements option B ("Focus on more generic group -> group transitions") from the previous comment. Note that this implementation is also based on v8's map transitions design: http://jayconrod.com/posts/52/a-tour-of-v8-object-representation (a Map in v8 is similar to what a Shape is in SpiderMonkey).
The idea is to have one Shape per ObjectGroup. In the future, this would allow removing the Shape pointer from ShapedObject (but that will be part of another bug report). Computed and numeric property names should not be tracked, and if you delete properties from an object it switches to dictionary mode, where properties are stored in a hash table. This prevents an absurd number of ObjectGroups and Shapes from being allocated.
|ObjectGroupTree::find(group, id, type)| is used to find the new object group given the current object group (=group), the property name (=id) and the type of the property (=type).
|ObjectGroupTree::record(cx, group, id, type)| is used to create a new object group given the current object group (=group), the property name (=id) and the type of the property (=type).
|ObjectGroupTree::getOrCreateRoot(cx, clasp, proto)| is used to find or create a root object group given a JS class pointer and a prototype object.
Root groups are created lazily for each clasp/proto pair. Root groups have an empty shape, and non-root groups inherit the shape from their parent.
Root map data structure: {clasp, proto} => root object group.
Group map data structure: {group, id, type} => object group.
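Restating the interface and the two maps described above as declarations, as a sketch only; the exact types and signatures in the work-in-progress patch may differ:

struct RootKey { const Class* clasp; JSObject* proto; };                    // {clasp, proto}
struct TransitionKey { ObjectGroup* group; jsid id; TypeSet::Type type; };  // {group, id, type}

class ObjectGroupTree {
    HashMap<RootKey, ObjectGroup*> roots_;             // {clasp, proto} => root object group
    HashMap<TransitionKey, ObjectGroup*> transitions_; // {group, id, type} => object group

  public:
    // Find or lazily create the root group for a clasp/proto pair (empty shape).
    ObjectGroup* getOrCreateRoot(JSContext* cx, const Class* clasp, JSObject* proto);
    // Follow an existing {group, id, type} transition, or return null if none is recorded.
    ObjectGroup* find(ObjectGroup* group, jsid id, TypeSet::Type type);
    // Create and record a new group for the {group, id, type} transition.
    ObjectGroup* record(JSContext* cx, ObjectGroup* group, jsid id, TypeSet::Type type);
};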
I've tested the patch on this test case:
1
2 function f(obj) {
3 return (obj.x | 0) + (obj.y | 0) | 0;
4 }
5
6 console.log('start');
7 console.log(f({x: 1, y: 2}));
8 console.log(f({x: 2, y: 3}));
9
10 console.log('first loop');
11 function first() {
12 do {
13 f({x: 2, y: 3})
14 } while (!inIon());
15 }
16 first();
17
18 console.log('end first loop');
19
20 //console.log(f({x: 3, y: 4}));
21 //console.log(f({x: 4, y: 5}));
22 //console.log(f({x: 5, y: 6}));
23
24 console.log('second loop');
25 function second() {
26 do {
27 f({x: 2, y: 3})
28 } while (!inIon());
29 }
30 second();
31
32 console.log('done');
On a debug build with the patch applied, and the cli flags |-D --no-threads --cache-ir-stubs off --no-ggc --no-cgc --ion-inlining=off --ion-scalar-replacement=off| set, I get the following assembly dump for line 2:
IonScript [1 blocks]:
BB #0 [00000,3,4] :: 299 hits
[Parameter]
[Parameter]
[Start]
[CheckOverRecursed]
[OsiPoint]
.set .Llabel623, .
.set .Llabel623, .
.set .Llabel623, .
.set .Llabel623, .
[Unbox:Object]
movabsq $0x7fffffffffff, %rcx
andq 0x28(%rsp), %rcx
[LoadUnboxedScalar]
movl 0x10(%rcx), %eax
[LoadUnboxedScalar]
movl 0x14(%rcx), %edx
[AddI]
addl %edx, %eax
[Box:Int32]
movl $0xffffffff, %ecx
cmpq %rcx, %rax
jbe .Lfrom660
int3
.set .Llabel661, .
.set .Lfrom660, .Llabel661
movabsq $0xfff8800000000000, %rcx
orq %rax, %rcx
[Return]
Without the patch, the same assembly code is shown twice for line 2. The reason is that the IonScript is invalidated before entering the second loop, since the object groups do not match at that point without the patch:
[IonInvalidate] Start invalidation.
[IonInvalidate] Invalidate /home/smvv/work/tests/ObjectGroups/polymorphic-objects.js:2, IonScript 0x7ffff59e5b40
The results look promising, but there are a lot of things that I'm not sure about at this point. That's why I would like to ask for feedback on this patch and the idea in general. The patch avoids unnecessary recompilation and polymorphic code generation when objects are allocated at different allocation sites.
Notes about the patch:
- The chunks that modify CacheIR are incomplete (that's the reason for using the JS shell flag --cache-ir-stubs off).
- I think it is best to use a new JitSpew channel for the fprintf messages. Idea: use "ObjectGroup" as channel name?
- Computed and numeric indices are not tested. Right now, they are probably creating way too many transitions.
- I added a 'feature gate' to allow comparing the execution of JS code with and without the use of the object group tree (see: |cx->useObjectGroupTree()|).
- Types can be reduced to {int,float,double,object} to reduce the ObjectGroupTree's fan-out, because all pointers are equal from the Shape point of view.
- I had to disable compacting and generational garbage collection (JS shell flags: --no-cgc --no-ggc) because the pointers in the root and group maps are currently not updated after a minor GC.
Assignee: nobody → sandervv
Flags: needinfo?(jdemooij)
Flags: needinfo?(bhackett1024)
Comment 7 • 8 years ago
I think my biggest concern is about the degradation of type information this represents. Type inference uses object groups to let us distinguish between objects with different property information both when compiling Ion code and in the types of the properties themselves, so that in monomorphic code the JS heap can be traversed without needing any type barriers or other checks. Reducing the set of types which we track for heap properties so that we can't distinguish between different objects would severely reduce the power of the analysis.
Flags: needinfo?(bhackett1024)
Reporter
Updated • 7 years ago
Flags: needinfo?(jdemooij)
Updated • 7 years ago
Keywords: triage-deferred
Priority: -- → P3
Comment 8 • 2 years ago
The bug assignee is inactive on Bugzilla, so the assignee is being reset.
Assignee: sandervv → nobody
Reporter
Comment 9 • 2 years ago
This no longer applies now that groups are gone.
Status: NEW → RESOLVED
Closed: 2 years ago
Resolution: --- → INVALID