Closed Bug 985304 Opened 11 years ago Closed 11 years ago

Update "resolving flexible lengths" algorithm to handle cases where flexibility sums to < 1

Categories

(Core :: Layout, defect)


Tracking


Status: RESOLVED FIXED
Target Milestone: mozilla32

People

(Reporter: dholbert, Assigned: dholbert)

References

(Blocks 1 open bug)

Details

Attachments

(4 files, 8 obsolete files)

(deleted), patch; MatsPalmgren_bugz: review+
(deleted), patch; dholbert: review+
(deleted), patch; dholbert: review+
(deleted), patch; MatsPalmgren_bugz: feedback+
The "resolving flexible lengths" algorithm has changed: http://dev.w3.org/csswg/css-flexbox-1/issues-cr-2012#issue-30 This is also mentioned in the "Changes" list at the bottom of the flexbox ED: { Changed algorithm for resolving flexible lengths to make behavior continuous as the sum of the flex factors approaches zero. (No change for a sum ≥ 1.) (Issue 30) Replaces this wording[1] with this one. [2] } [1] http://www.w3.org/TR/2012/CR-css3-flexbox-20120918/#resolve-flexible-lengths [2] http://dev.w3.org/csswg/css-flexbox/#resolve-flexible-lengths
Assignee: nobody → dholbert
Status: NEW → ASSIGNED
Depends on: 995386
Blocks: 996945
Actually, I think we can produce the same results as the updated algorithm, without having to rewrite very much, by simply adding a step to our existing code [which matches the old algorithm], as I noted here: http://lists.w3.org/Archives/Public/www-style/2014Apr/0253.html
Summary: Implement Tab's "resolving flexible lengths" algorithm when flexibility sums to < 1 → Update "resolving flexible lengths" algorithm to handle cases where flexibility sums to < 1
I've updated my suggested tweak to the old algorithm, as noted here:
http://lists.w3.org/Archives/Public/www-style/2014Apr/0398.html

I'll copy-paste my tweak here, for convenience:
==========
My suggested tweak is now entirely contained in Step 5 of the old algorithm. Here it is, in context:

# Step 5:
# If the sign of the free space is positive and the
# algorithm is using the flex grow factor, or if the
# sign of the free space is negative and the
# algorithm is using the flex shrink factor:
#
# [BEGIN INSERTION:]
# a) If the "original free space" has not yet been
#    initialized, then set it to the free space.
#
# b) Compute the sum of the unfrozen flex items'
#    flex factors. If this sum is less than 1,
#    compute the product of this sum and the
#    "original free space". If this product's
#    absolute value is less than the free space's
#    absolute value, then set the free space to
#    this product.
# [END INSERTION]
#
# (adding "letter c" label to the following spec-text)
# c) Distribute the free space to each flexible item's
#    main size in proportion to the item's flex factor:
#    [etc etc]

For reference, the old spec text being modified here is at this URL:
http://www.w3.org/TR/2012/CR-css3-flexbox-20120918/#resolve-flexible-lengths
==========

In human language, the idea here is: if our flex factors sum to less than 1, then we treat them as a percentage of the original free space, instead of as weights. This is equivalent to simply scaling down the free space by multiplying it by the sum of the flex factors, and then treating the flex factors as weights like we normally would [no special percentage magic needed]. That's how I've described it in my suggested spec-text (instead of adding any "treat weight as percentage" special-cases), and it's how I've implemented it.
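For concreteness, here is a minimal C++ sketch of inserted steps (a) and (b) in isolation. This is illustrative only; the function name and parameters are invented for this comment, not taken from the actual nsFlexContainerFrame.cpp patch, which folds this logic into its existing loop.

  #include <cmath>

  // Sketch of inserted steps (a)/(b). All names here are made up for
  // illustration; this is not the real nsFlexContainerFrame code.
  static void ApplyFractionalFlexTweak(float aFlexFactorSum,
                                       float& aFreeSpace,
                                       float& aOrigFreeSpace,
                                       bool& aOrigFreeSpaceInitialized)
  {
    // (a) Initialize the "original free space" the first time we get here.
    if (!aOrigFreeSpaceInitialized) {
      aOrigFreeSpace = aFreeSpace;
      aOrigFreeSpaceInitialized = true;
    }

    // (b) If the unfrozen items' flex factors sum to less than 1, only hand
    //     out that fraction of the *original* free space (and never more
    //     than the free space that's currently available).
    if (aFlexFactorSum < 1.0f) {
      const float scaledFreeSpace = aFlexFactorSum * aOrigFreeSpace;
      if (std::fabs(scaledFreeSpace) < std::fabs(aFreeSpace)) {
        aFreeSpace = scaledFreeSpace;
      }
    }

    // (c) ...then distribute aFreeSpace in proportion to each item's flex
    //     factor, exactly as the old algorithm already did.
  }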
OS: Linux → All
Attached patch part 1: rename "flex weight" to "weight" (obsolete) (deleted) — Splinter Review
This first patch renames "flex weight" (which really represents "possibly-scaled flex factor") to "weight" in a bunch of places. It's a search-and-replace, with some comment tweaks.

I'm using "weight" to mean "flex factor, already scaled as needed, ready to be used for a weighted distribution of our free space."

Motivation:
(a) "flex weight" isn't a meaningful term anymore -- it used to be defined in the spec, but it's since been renamed to "flex factor" (with the nuance that the "scaled flex factor" is the actual weight that we use for flex-shrink).
(b) the next patch is going to add some variables/functions for tracking the sum of our actual [not scaled] flex factor values (flex-grow or flex-shrink), to check whether the sum is less than 1, per my suggested spec text above. And variables like "longVariableNameWithFlexWeight" & "longVariableNameWithFlexFactor" were too similar for my liking. They're easier to distinguish, IMHO, if I just use "Weight" in the former case.
Attachment #8412199 - Flags: review?(matspal)
Attached patch part 1 v2: rename "flex weight" to "weight" (obsolete) (deleted) — Splinter Review
(updated to make one whitespace change -- taking an opportunity to condense 2 lines to 1, after shortening a function-name).
Attachment #8412199 - Attachment is obsolete: true
Attachment #8412199 - Flags: review?(matspal)
Attachment #8412203 - Flags: review?(matspal)
Attached patch part 1 v2a: rename "flex weight" to "weight" (obsolete) (deleted) — Splinter Review
(sorry, attached the wrong patch file)
Attachment #8412203 - Attachment is obsolete: true
Attachment #8412203 - Flags: review?(matspal)
Attachment #8412205 - Flags: review?(matspal)
This patch:
- Adds an accessor to FlexItem to let us get the *actual* flex factor (in particular, the actual flex-shrink value, instead of the *scaled* flex-shrink value).
- Computes the sum of our unfrozen items' flex factors.
- If that sum is a fraction less than 1, doesn't hand out any more than that fraction of our original free space.
- (We establish our original free space the first time through the loop where we actually have the opportunity to flex, from having free space whose sign aligns with our flex factor.)

In other words, this implements my suggested spec tweak from comment 3.
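A rough sketch of what those additions amount to: only GetFlexFactor's shape matches the patch (it's quoted in the review below); the FlexItem class, members, and the summation helper here are paraphrased approximations, not the actual code.

  #include <vector>

  // Paraphrased sketch; FlexItem's real class is much larger.
  class FlexItem {
  public:
    float GetFlexFactor(bool aIsUsingFlexGrow) const
    {
      return aIsUsingFlexGrow ? mFlexGrow : mFlexShrink;
    }
    bool IsFrozen() const { return mIsFrozen; }

  private:
    float mFlexGrow = 0.0f;
    float mFlexShrink = 1.0f;
    bool mIsFrozen = false;
  };

  // Sum the *actual* (unscaled) flex factors of the unfrozen items, so the
  // caller can check whether the sum is < 1 and scale the free space.
  static float SumUnfrozenFlexFactors(const std::vector<FlexItem>& aItems,
                                      bool aIsUsingFlexGrow)
  {
    float flexFactorSum = 0.0f;
    for (const FlexItem& item : aItems) {
      if (!item.IsFrozen()) {
        flexFactorSum += item.GetFlexFactor(aIsUsingFlexGrow);
      }
    }
    return flexFactorSum;
  }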
Attachment #8412209 - Flags: review?(matspal)
Comment on attachment 8412205 [details] [diff] [review]
part 1 v2a: rename "flex weight" to "weight"

(In reply to Daniel Holbert [:dholbert] from comment #3)
> In human language, the idea here is: If our flex factors sum to less than 1,
> then we treat them as a percentage of the original free space, instead of as
> weights.

Thanks for spelling that out. I doubt I would have realized that from just reading 9.7, so I would suggest that something like the above should be in the spec as well.
Attachment #8412205 - Flags: review?(matspal) → review+
Comment on attachment 8412209 [details] [diff] [review]
part 2: gracefully handle flex factors that sum to < 1

>layout/generic/nsFlexContainerFrame.cpp
>+  float GetFlexFactor(bool aIsUsingFlexGrow)
[...]
>+    return aIsUsingFlexGrow ?
>+      mFlexGrow : mFlexShrink;

This fits on one line.

>   float runningWeightSum = 0.0f;
>+  float runningFlexFactorSum = 0.0f;

I think the "running" prefix just makes the code harder to read and doesn't add any value. It's quite clear from the comments and the code that both of these are incremented in the loop that follows. I'd prefer just 'weightSum' and 'flexFactorSum'.

>       item->SetShareOfWeightSoFar(curWeight /
>                                   runningWeightSum);

This fits on one line.

>     if (runningWeightSum != 0.0f) { // no distribution if no flexibility
>+      MOZ_ASSERT(runningFlexFactorSum != 0.0f,
>+                 "running flex factor sum can't be 0, if a weighted sum "
>+                 "of its components is nonzero");

This is after said loop. So here "running" just adds confusion, since at this point they're the final sums.

r=mats, with or without the "running" prefix as you see fit
Attachment #8412209 - Flags: review?(matspal) → review+
A couple of general observations on ResolveFlexibleLengths:

I don't see why we need the SetShareOfWeightSoFar stuff. We're looping over the items twice anyway, so couldn't the first loop just calculate the sums and the second calculate each item's size directly? It might make the main-size calculation clearer without the intermediary ShareOfWeightSoFar step (and we wouldn't need that field on the items).

Also, would it be an optimization if the first loop counted the unfrozen items, so that we could bail out of the second loop after the last unfrozen item is done? Actually, there's a third loop before these two that could count those (it already does an IsFrozen() check on each item), and then we could skip both loops if there are none. It might be worth doing that even earlier, when forming the FlexLine or something: put the count in a member, then do an early return in ResolveFlexibleLengths if it's zero.
(In reply to Mats Palmgren (:mats) from comment #9)
> >   float runningWeightSum = 0.0f;
> >+  float runningFlexFactorSum = 0.0f;
>
> I think the "running" prefix just makes the code harder to read and
> doesn't add any value. It's quite clear from the comments and the
> code that both these are incremented in the loop that follows.
> I'd prefer just 'weightSum' and 'flexFactorSum'.

Good point -- you're absolutely right about "flexFactorSum": we only use that one after we've finished adding up all the values, so there's no "running" about it. I'll rename that one.

For "runningWeightSum", though, the "running" is important, because we use its intermediate values as we add to it, e.g. in this line:
  item->SetShareOfWeightSoFar(curWeight /
                              runningWeightSum);

If that used "weightSum", then that would suggest that it's performing "current weight / total weight" -- but it's not the total weight, it's the *weight we've seen up to this point*.

> >     if (runningWeightSum != 0.0f) { // no distribution if no flexibility
> >+      MOZ_ASSERT(runningFlexFactorSum != 0.0f,
> >+                 "running flex factor sum can't be 0, if a weighted sum "
> >+                 "of its components is nonzero");
>
> This is after said loop. So, here "running" just adds confusion since
> it's the final sums at this point.

Agreed. As noted above, I'm dropping the prefix from runningFlexFactorSum (and from the assertion text here), and I'll add a comment here to indicate that "runningWeightSum" has become the total sum of all the weights at this point in the code.

> r=mats, with or without the "running" prefix as you see fit

Thanks! (I've addressed the other parts of comment 9 locally as well.)
(In reply to Daniel Holbert [:dholbert] from comment #11)
> For "runningWeightSum", though, the "running" is important, because we use
> its intermediate values as we add to it, e.g. in this line:
>   item->SetShareOfWeightSoFar(curWeight /
>                               runningWeightSum);

Just want to point out that I was aware of that use.

> If that used "weightSum", then that would suggest that it's performing
> "current weight / total weight" -- but it's not total weight, it's the
> *weight we've seen up to this point*.

But there's a comment explicitly pointing that out, and the name "SetShareOfWeightSoFar" further highlights it. (There would be no need for "SoFar" if the sum were the final one.)

I don't feel strongly about it, though; leaving 'runningWeightSum' is fine if you still think the code would be misread without it. Perhaps in that case we should add a temp for the uses after the loop, when it's final, e.g.:
  const auto weightSum = runningWeightSum;
(If we can't eliminate SetShareOfWeightSoFar altogether, that is.)
(In reply to Mats Palmgren (:mats) from comment #10)
> I don't see why we need the SetShareOfWeightSoFar stuff. We're looping
> over the items twice anyway so could the first loop just calculate the
> sums and the second calculate each item's size directly?

If we did it the way you suggest, it would indeed be clearer & more direct, but it'd be less guaranteed to distribute exactly all of the free space. It would only work **if** we could guarantee that the following equation holds for all possible flex values:
  flex1/totalFlex + flex2/totalFlex + ... + flexN/totalFlex == 1.0
  (where totalFlex = flex1 + flex2 + ... + flexN)

The problem is, we can *not* guarantee that with float math, because we may lose some precision when we calculate totalFlex.

For a concrete example, suppose we have 1000000px to distribute, and we have one flex item with "flex: 999999" and one flex item with "flex: 1". There are a few possible outcomes here, depending on how we implement it:
 (A) first item ends up at 999999px, second at 1px (ideal)
 (B) first item ends up at 1000000px, second at 0px (less ideal, but reasonable given float error)
 (C) first item ends up at 1000000px, second at 1px (BAD: the items overflow!!! We distributed too much width!)
 (D) first item ends up at 999999px, second at 0px (BAD: we didn't distribute all of our space!)

The strategy you're suggesting could easily produce cases (C) and (D) as a result of float error, because when huge & small flex values are added, we'll lose some precision, and then when we give each item a share based on the (lossy) total value, we could end up over-distributing or under-distributing.

A hacky fix to this is "just give any extra space to one of the flex items -- say, the last one". My solution is a generalized version of that, with any error smeared out over the flex items. The idea is to walk backwards through the items and hand each one the share that it knows it deserves of the "remaining free space". (When we've finished and arrive at the first flexible item, it'll ask for 100% of the free space -- since its ShareOfWeightSoFar is 1.0 -- which will exactly use up any remaining free space.)

This strategy isn't perfect -- e.g. it means that a "flex: 1" item at the beginning may receive a slightly different amount of free space than a "flex: 1" item at the end, if there are larger flexibilities in between them (because the flex:1 items will end up with vastly different "ShareOfWeightSoFar" values, which will be resolved against vastly different "remaining free space" values). But it's impossible to completely avoid those sorts of problems, and this at least ensures that we distribute *exactly all of the free space*, which is my highest priority.

> Also, would it be an optimization if the first loop counted the unfrozen
> items? so that we could bail out in the second loop after the last
> unfrozen item is done. Actually, there's a third loop before these two
> that could count those (it already does an IsFrozen() check on each item),
> and then skip both loops if there are none? It might be worth doing that
> even earlier, when forming the FlexLine or something, putting it in
> a member, then do an early return in ResolveFlexibleLengths if it's zero.

Yeah, that sort of thing could save us a few loops over our items, in cases where someone has "flex: none" on everything. I like your last suggestion the best -- adding a private "mAllItemsFrozen" to FlexLine or something. I'll file a followup on that.
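To illustrate the backward-walk strategy described above, here is a minimal standalone C++ sketch. The Item struct, field names, and function name are invented for this illustration (the real code lives in nsFlexContainerFrame.cpp and is structured differently); the point is only to show why the last step exactly exhausts the free space.

  #include <vector>

  // Illustrative sketch of the "distribute exactly all of the free space"
  // strategy; all names here are made up.
  struct Item {
    float weight = 0.0f;             // flex factor (possibly scaled)
    float shareOfWeightSoFar = 0.0f; // weight / (sum of weights up to & including this item)
    float mainSize = 0.0f;
  };

  static void DistributeFreeSpace(std::vector<Item>& aItems, float aFreeSpace)
  {
    // First pass (forward): record each item's share of the weight seen so far.
    float runningWeightSum = 0.0f;
    for (Item& item : aItems) {
      runningWeightSum += item.weight;
      if (runningWeightSum != 0.0f) {
        item.shareOfWeightSoFar = item.weight / runningWeightSum;
      }
    }

    // Second pass (backward): each item takes its share of whatever free
    // space is *still remaining*. The first flexible item's share is 1.0
    // (its weight divided by just its own weight), so it soaks up any
    // leftover space exactly -- float error can't make us over- or
    // under-distribute.
    for (auto it = aItems.rbegin(); it != aItems.rend(); ++it) {
      const float portion = it->shareOfWeightSoFar * aFreeSpace;
      it->mainSize += portion;
      aFreeSpace -= portion;
    }
  }

With the "flex: 999999" / "flex: 1" example above, this produces outcome (A): the second item takes (1/1000000) of 1000000px, and the first item then takes 100% of the 999999px that remain.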
(In reply to Mats Palmgren (:mats) from comment #12)
> But there's a comment explicitly pointing that out and the name
> "SetShareOfWeightSoFar" further highlights it.
> (There would be no need for "SoFar" if the sum was the final one.)

Yeah, you're right; the "running" is probably clear enough from the context. (And it is a bit confusing later on, after it's become the final sum and is no longer "running".) I'll just drop the prefix entirely from this existing variable, as a "patch 0" here.
Attached patch part 0: drop "running" prefix (deleted) — Splinter Review
Attachment #8412807 - Flags: review?(matspal)
(rebased part 1 on top of part 0, dropping "running" prefix)
Attachment #8412810 - Flags: review+
Here's part 2 with review comments addressed and with "running" prefixes dropped [rebased on top of part 0 & updated part 1]
Attachment #8412205 - Attachment is obsolete: true
Attachment #8412209 - Attachment is obsolete: true
Attachment #8412814 - Flags: review+
(tweaking part 2 to remove a comment that I added locally for clarification, & then meant to remove after I decided on dropping the "running" prefix.)
Attachment #8412814 - Attachment is obsolete: true
Attachment #8412823 - Flags: review+
Attachment #8412807 - Flags: review?(matspal) → review+
(In reply to Daniel Holbert [:dholbert] from comment #13)

Thanks for elaborating. I agree we must avoid (C) and should avoid (D). I don't think (B) matters if it only occurs for edge cases (a huge number of items, or extreme lengths or weights).

The current algorithm is nice, but it always requires two passes over the items. I'm wondering if there's an algorithm that avoids (C) and (D), only rarely ends up with (B), and only requires a second pass when we actually have a rounding error. So in most cases it would only need one pass.
(In reply to Mats Palmgren (:mats) from comment #18)
> I don't think (B) matters if it only occurs for edge cases (huge number
> of items, or extreme lengths or weights).

Agreed.

> I'm wondering if there's an algorithm, which can avoid (C) and (D) and only
> rarely end up with (B), that only requires a second pass when we actually
> have a rounding error. So in most cases it would only need one pass.

I doubt it. I think flexbox-style weighted distribution of free space *must* require at least 2 passes over the items, because before we know how much free space any of our items gets, we have to know:
 (a) how much free space there is (have to subtract all of the flex base sizes from container's size)
 (b) what the sum of the flex values is

A flex item's e.g. "flex-grow: 5" value is meaningless on its own (for determining how much free space it should get) until we've computed both (a) and (b), and computing those things requires looping over all of the items.

(Plus: we may need to repeat this process several times, if we violate a flex item's min-size / max-size.)
(Having said that, it's definitely possible that we could optimize a bit by pre-calculating more and/or merging some loops together. I've tried to split up conceptually-separate steps, to make the code readable/maintainable, but it's possible there's some room for steps to be overlaid / combined.)
(In reply to Daniel Holbert [:dholbert] from comment #19)
> I think flexbox-style weighted distribution of free space *must* require at
> least 2 passes over the items, because before we know how much free space
> any of our items gets, we have to know:
> (a) how much free space there is (have to subtract all of the flex base
>     sizes from container's size)
> (b) what the sum of the flex values is

Yes, but currently we're doing three passes over the items. I was hoping we could bring that down to two. I think we could do (b) in the first loop here:
http://mxr.mozilla.org/mozilla-central/source/layout/generic/nsFlexContainerFrame.cpp?rev=16bca471e1dc#1654
(In reply to Mats Palmgren (:mats) from comment #21)
> Yes, but currently we're doing three passes over the items. I was hoping we
> could bring that down to two. I think we could do (b) in the first loop
> here:
> http://mxr.mozilla.org/mozilla-central/source/layout/generic/nsFlexContainerFrame.cpp?rev=16bca471e1dc#1654

True; though, it's not clear to me that this would improve things, since we only *need* to compute (b) (which is a bunch of float math and hence has non-trivial cost) if we have the right value for (a) (the "If sign of free space matches the type of flexing that we're doing" check). In other words: depending on what (a) produces, we may or may not need to bother computing (b) and handing out any free space. So, without profiling, it's not clear to me that merging (a) and (b) into the same loop is a net win.

(In reply to Daniel Holbert [:dholbert] from comment #13)
> I like your last suggestion the best - adding a private "mAllItemsFrozen"
> to FlexLine or something. I'll file a followup on that.

FWIW, I filed bug 1001653 on this.
In other words:
* Upside of merging (a) and (b): better cache locality. [I assume that's what you're going for?]
* Downside of merging (a) and (b): potentially-wasted work on computing (b) before we know if we'll need it. (Also, a less significant downside: possibly harder-to-follow code, from doing separate steps at the same time.)

Maybe the cache-locality win is significant enough to more than compensate for that downside, but I'm in no way sure that that's the case.
Also, FWIW: I have mochitests for this bug (adding to flexbox_layout_testcases.js, used by mochitest test_flexbox_layout.html) which I need to finish off and post for review before this is ready to land.
Flags: in-testsuite?
No longer depends on: 995386
(In reply to Daniel Holbert [:dholbert] from comment #22)
> True; though, it's not clear to me that this would improve things, since we
> only *need* to compute (b) (which is a bunch of float math and hence has
> non-trivial cost) if we have the right value for (a) (the "If sign of free
> space matches the type of flexing that we're doing" check).

Fair enough, but how common is it in practice that the available free space is zero when you have flexible items? Also, maybe we could add up the weights just once and then subtract the weight of an item when we freeze it?

(In reply to Daniel Holbert [:dholbert] from comment #23)
> * Upside of merging (a) and (b): better cache locality. [I assume that's
>   what you're going for?]

I was hoping to eliminate the third pass over the items by using a different method of distributing the rounding error, i.e. not distributing it at all if there isn't any.
(In reply to Mats Palmgren (:mats) from comment #25)
> Fair enough, but how common is it in practice that the available free space
> is zero when you have flexible items?

Probably not very common. (Though it's not only when the free space is zero -- it's when the *sign* of the free space doesn't match "the type of flexing that we're ultimately doing", based on isUsingFlexGrow, which uses the min/max-clamped flex items' base sizes and sees whether they overflow or underflow. The rest of the algorithm doesn't do any up-front clamping, and it can end up producing intermediate free-space values that have the opposite sign from the flexibility that we're using. A simple example of this is a flex item with a huge flex-basis but a small flex-width; this will make us set isUsingFlexGrow = true, but then, on the first time through this code, we'll have negative free space and we'll skip the width distribution so that we can get straight to the clamping/freezing code at the end.)

Anyway: not very common, but possibly more common once we've addressed bug 984711 and start applying nonzero min-sizes to flex items by default.

> Also, maybe we could add up the weights just once and then subtract the
> weight of an item when we freeze it?

Maybe, though that could produce the wrong result due to float error, since subtracting float values is lossy. e.g. (smallFlex + largeFlex) - largeFlex won't necessarily give you back smallFlex. Maybe if we used a double, the error would be sufficiently small to not cause trouble. I've tried to avoid relying on doubles as a way of avoiding float error, because that feels like cheating. :) (And it could just push problems off a bit without really fixing them.)

But in any case, it's pretty rare in practice that we have to freeze flex items for min/max violations & run through this process more than once or twice. It'd require very specially-crafted content where you violate *only a few* of the items' min/max-widths after the first attempt to distribute space, and then violate *only a few* of the remaining items' min/max-widths on the next loop, etc. So it may be too edge-casey to worry about optimizing for.
> A simple example of this is a flex item with a huge flex-basis but a small
> flex-width;

Sorry, I meant "a small max-width".
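A tiny standalone demo of the float-precision concern from the previous comment (that subtracting a weight back out of a running 32-bit float sum is lossy); the particular values are arbitrary and chosen only to make the loss obvious.

  #include <cstdio>

  // Demo: (smallFlex + largeFlex) - largeFlex need not give back smallFlex.
  int main()
  {
    float smallFlex = 0.3f;
    float largeFlex = 1.0e8f;

    float sum = smallFlex + largeFlex;  // smallFlex is absorbed into largeFlex's rounding
    float recovered = sum - largeFlex;  // not necessarily 0.3 again

    std::printf("smallFlex = %.9g\n", smallFlex);  // ~0.300000012
    std::printf("recovered = %.9g\n", recovered);  // 0 -- smallFlex was lost entirely
    return 0;
  }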
Actually, I think I want to change one thing about the algorithm in comment 3. Instead of waiting to initialize the "original free space" until the sign of our free space matches the type of flexing that we're doing, I think we should *always* initialize the "original free space" in the first loop, and just clamp it to 0 if its sign doesn't match our type of flexing.

Otherwise, we end up with a discontinuity in behavior between these cases:
 i) a flex item w/ max-size & flex base size *just* large enough to make us initially overflow the container (and then clamp and not overflow anymore)
 ii) a flex item w/ max-size & flex base size *not quite* large enough to make us initially overflow

My algorithm from comment 3 would give us a significantly different "original free space" value for these two cases (and would hence produce significantly different sizes for any other flex items, if their flex values are fractional and sum to less than 1). In (i), it would set the "original free space" to a large value *after* clamping the max-size-clamped item; in (ii), it would set the "original free space" to a small value *before* clamping. But if we set the "original free space" (to 0) before clamping in case (i), then there won't be a discontinuity.

I'm not 100% sure about this, though, and I think it leaves us with a discontinuity when the flexible items' sum goes from < 1 to > 1 in this sort of situation... Not sure if that's avoidable. I need to think a bit more, and perhaps defer this to a followup / to when I get some more feedback on my www-style posts.
I think I could address comment 28 by allowing the "original free space" to increase (in magnitude, i.e. become more positive or more negative, depending on the type of flexibility we're doing) whenever the free space is larger than it. That would diverge more substantially from what the spec currently has, though, so I'm going to hold off on making that tweak until there's been a response to my questions about the new algorithm on www-style.

I'm including some testcases in my upcoming mochitests patch that demonstrate the discontinuities mentioned in comment 28. If we end up addressing those discontinuities (in a followup bug), then we can update the testcases to no longer be discontinuous.
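For reference, a sketch of the variant floated in the previous two comments: seed the "original free space" on the first pass (clamped to zero when its sign doesn't match the kind of flexing we're doing), and let it grow in magnitude on later passes. This is a proposal sketch only, with invented names; it is not the patch that landed.

  #include <cmath>

  // Proposal sketch (comments 28/29); names are illustrative.
  static void UpdateOrigFreeSpace(float aFreeSpace, bool aIsUsingFlexGrow,
                                  bool& aInitialized, float& aOrigFreeSpace)
  {
    // Does the sign of the current free space match the type of flexing?
    const bool signMatches = aIsUsingFlexGrow ? (aFreeSpace > 0.0f)
                                              : (aFreeSpace < 0.0f);
    if (!aInitialized) {
      // First pass: record the free space, or 0 if its sign is "wrong".
      aOrigFreeSpace = signMatches ? aFreeSpace : 0.0f;
      aInitialized = true;
    } else if (signMatches &&
               std::fabs(aFreeSpace) > std::fabs(aOrigFreeSpace)) {
      // Later passes: allow the original free space to grow in magnitude.
      aOrigFreeSpace = aFreeSpace;
    }
  }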
Here are the tests I've written for this. Requesting f? instead of r?, since I don't think this needs a thorough review (it's extending an existing test), but it should probably at least get a quick once-over.

(This patch's modified file, flexbox_layout_testcases.js, is used by test_flexbox_layout.html to dynamically generate flex containers with a variety of orientations & measure their children's actual computed width or height against the expected value encoded in the testcase.)

In most cases, I've included prose and/or inline comments with notes about where the expected values come from, to hopefully make it easier to remember what's going on if & when any of these tests starts failing.
Attachment #8416182 - Flags: feedback?(matspal)
(forgot to qref in a comment-tweak. Fixed now.)
Attachment #8416182 - Attachment is obsolete: true
Attachment #8416182 - Flags: feedback?(matspal)
Attachment #8416184 - Flags: feedback?(matspal)
Sorry, one more update -- I noticed one other chunk that needed the same comment-tweak from the previous update (its 'flex-shrink' analogue). Fixed that as well.

Also: in this patch, the testcases for the discontinuities mentioned in comment 28 & comment 29 are flagged with XXXdholbert, FWIW.
Attachment #8416184 - Attachment is obsolete: true
Attachment #8416184 - Flags: feedback?(matspal)
Attachment #8416188 - Flags: feedback?(matspal)
(gah, and I introduced a ")" in my last tweak which broke the test. Fixed here. & now, done spamming this bug for the day. :))
Attachment #8416188 - Attachment is obsolete: true
Attachment #8416188 - Flags: feedback?(matspal)
Attachment #8416232 - Flags: feedback?(matspal)
Comment on attachment 8416232 [details] [diff] [review]
part 3 v4: extend mochitests to cover cases where flexibilities sum to < 1

Do we need a followup bug to track the issues in comment 28/29? (Perhaps it can wait if it's sorted out on www-style instead.)
Attachment #8416232 - Flags: feedback?(matspal) → feedback+
Blocks: 1005660
Flags: in-testsuite? → in-testsuite+
Blocks: 1200261