Closed
Bug 1391293
Opened 7 years ago
Closed 7 years ago
[photon] Don't remove search suggestions in the URL bar until a new suggestion set has arrived to avoid flickering
Categories
(Firefox :: Address Bar, enhancement, P1)
Tracking
Status: RESOLVED FIXED
Target Milestone: Firefox 57
Tracking | Status
---|---
firefox57 | fixed
People
(Reporter: phlsa, Assigned: mak)
References
Details
(Whiteboard: [fxsearch])
Attachments
(3 files)
When typing a letter in the URL bar, the bar returns a mix of history results and search suggestions. The suggestions usually take a few ms longer to arrive, which leads to a noticeable and annoying flicker.
This is particularly visible when browser.urlbar.matchBuckets is set to a value that puts search suggestions to the top of the result list.
We should avoid this situation by keeping the old result set (of search suggestions) around until the new set has arrived.
Updated•7 years ago
Priority: -- → P1
Whiteboard: [fxsearch]
Assignee
Comment 1•7 years ago
Don't we already do this? Basically that's the cause of bug 1291927.
(In reply to (Currently slow to respond) Philipp Sackl [:phlsa] (Firefox UX) please use needinfo from comment #0)
> The suggestions usually take a few ms longer to arrive,
> which leads to a noticeable and annoying flicker.
We should measure that; I got various reports that state the exact opposite. It really depends on a lot of factors (mechanical disk vs SSD, low-latency vs high-latency network, DSL vs fiber...). I'd not assume anything based just on our own experience.
Assignee
Comment 2•7 years ago
Sorry, some more stream of consciousness. Basically, if local results come first, they are added in place of the previous suggestions; then, when new suggestions arrive, they retake their place again.
I fear this won't be trivial to fix in a proper way, because it's just how the populating happens, top-down.
The fact is the new search has no idea at all if a search suggestion will come. "Booking" the space for a possible suggestion may require adding complex code to handle the case where it won't happen. We should also test selection behavior. Plus if the user types "amd" and a previous suggestion for "am" was "amazon", the old suggestion has nothing to do with the newly typed string.
We could surely do some string-comparison hack: if the old suggestion is an extension of the typed text, keep it around; otherwise proceed as usual. It wouldn't be a complete fix, and it wouldn't work very well with fuzzy matches, for which we can't just do a "starts with" check.
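Just to illustrate the kind of check I mean, a minimal sketch (purely illustrative, not the actual UnifiedComplete code; the function and variable names are made up):

// Illustrative only: decide whether an old suggestion can stay on screen
// while we wait for the new suggestions to arrive.
function shouldKeepOldSuggestion(oldSuggestion, typedText) {
  // Plain prefix check; it misses fuzzy matches like "ama" -> "I am a robot".
  return oldSuggestion.toLowerCase().startsWith(typedText.toLowerCase());
}

shouldKeepOldSuggestion("amazon", "ama"); // true: keep it until new suggestions arrive
shouldKeepOldSuggestion("amazon", "amd"); // false: drop it right away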
Reporter
Comment 3•7 years ago
Some good points there!
The core issue seems to be this: the async, top-down nature of the awesomebar conflicts with the bucketing approach, leading to a noisy experience. I think we will have some version of this issue for any kind of bucketing we do, so we'll need to figure this out before shipping.
(In reply to Marco Bonardo [::mak] from comment #1)
> don't we already do this, and basically that's the cause of bug 1291927?
I don't think this is the same issue. If I'm understanding that bug correctly, it is about starting an entirely new search. This bug, however, shows up on every typed character within a search.
Example: let's assume I have the string »am« in the bar already and it is showing me 3 history results and 3 search suggestions.
Now I type an additional »a«, leading to the string »ama«.
For a split second, my search suggestions are replaced with history results, before then being replaced with search suggestions again.
(In reply to Marco Bonardo [::mak] from comment #2)
> I fear this won't be trivial to fix in a proper way, because it's just how
> the populating happens, top-down.
Hm, are we retrieving suggestions one by one from the search engine, or as a block?
> The fact is the new search has no idea at all if a search suggestion will
> come. "booking" the space for an eventual suggestion may require adding
> complex code to handle the case where it won't happen. We should also test
> selection behavior. Plus if the user types "amd" and a previous suggestion
> for "am" was "amazon", the old suggestion has nothing to do with the newly
> typed string.
Unless there is some massive (seconds) network latency, this shouldn't really make a difference.
If we are dealing with this kind of latency, we could take a page out of Cliqz's book: they »freeze« the results once they detect intent to select (arrow down or mouse move).
Reporter
Comment 4•7 years ago
Here's a screen recording of the issue as it shows up on my machines.
Assignee
Comment 5•7 years ago
(In reply to (Currently slow to respond) Philipp Sackl [:phlsa] (Firefox UX) please use needinfo from comment #3)
> Example: let's assume I have the string »am« in the bar already and it is
> showing me 3 history results and 3 search suggestions.
> Now I type an additional »a«, leading to the string »ama«.
> For a split second, my search suggestions are replaced with history results,
> before then being replaced with search suggestions again.
Yes, even if a string had suggestions, we cannot know whether string+char will return any search suggestions. Surely we can guess that adding one letter may still return some suggestions, but due to fuzzy matching we can't be sure. "ama" could match "I am a robot" while "amad" wouldn't. The order of suggestions could also be changed by the search engine depending on the typed string.
Thus we'd need to store the previous suggestions and filter them against the new string, and then we may still need to replace them in-place if the order changes remotely.
It's not impossible, it's just not a trivial one-liner as it may sound.
> Hm, are we retrieving suggestions one by one from the search engine, or as a
> block?
As a block, but I'm not sure how that changes things.
> > Plus if the user types "amd" and a previous suggestion
> > for "am" was "amazon", the old suggestion has nothing to do with the newly
> > typed string.
> Unless there is some massive (seconds) network latency, this shouldn't
> really make a difference.
We cannot predict latencies, nor assume all the networks are the same. People still have access to horrible Internet connections around various areas of the world (I don't live in the desert and just got a 20Mbps DSL this year, for example, I won't see better speeds for at least 3 years).
I still don't think it's a good idea to show non-matching things, in general.
> If we are dealing with this kind of latency, we could take a page out of
> Cliqz's book: they »freeze« the results once they detect intent to select
> (arrow down or mouse move).
That sounds like it would make things even worse, because at that point the wrong suggestion would be frozen in the results and usable.
We need to do some testing. I think we can reduce the flickering; I just don't think we can reuse 100% of the previous results. And we need to augment the autocomplete results APIs to be able to replace a match.
Reporter
Comment 6•7 years ago
(In reply to Marco Bonardo [::mak] from comment #5)
> (In reply to (Currently slow to respond) Philipp Sackl [:phlsa] (Firefox UX)
> please use needinfo from comment #3)
> > Example: let's assume I have the string »am« in the bar already and it is
> > showing me 3 history results and 3 search suggestions.
> > Now I type an additional »a«, leading to the string »ama«.
> > For a split second, my search suggestions are replaced with history results,
> > before then being replaced with search suggestions again.
>
> Yes, even if a string had suggestions, we cannot know whether string+char
> will return any search suggestions. Surely we can guess that adding one
> letter may still return some suggestions, but due to fuzzy matching we can't
> be sure. "ama" could match "I am a robot" while "amad" wouldn't. The order
> of suggestions could also be changed by the search engine depending on the
> typed string.
> Thus we'd need to store the previous suggestions and filter them against the
> new string, and then we may still need to replace them in-place if the order
> changes remotely.
Maybe I still don't understand the full picture...
Currently the process of typing a letter looks like this:
State 1 => State without suggestions => State 2
What I think you are describing is something like this:
State 1 => local prediction of State 2 => State 2
What I think we should do is:
State 1 => State 1 => State 2
Looking at other products, Safari and Amazon (in their search suggestions on Amazon.com) are using this pattern. Safari is actually really close to what we are doing since they are also putting history and search suggestions into two respective buckets.
> It's not impossible, it's just not a trivial one liner as it may sound.
Not saying it's trivial, just saying it's important :)
> > Hm, are we retrieving suggestions one by one from the search engine, or as a
> > block?
>
> As a block, but I'm not sure how that changes things.
Well, it means that we can be sure that suggestions only come in once. As soon as they are in, they are in, for any given query (AIUI, history results come one by one, in contrast).
So at least we don't first think that there is just one suggestion and render it, only to have another one come in, and then another one.
> We cannot predict latencies, nor assume all the networks are the same.
> People still have access to horrible Internet connections around various
> areas of the world (I don't live in the desert and just got a 20Mbps DSL
> this year, for example, I won't see better speeds for at least 3 years).
> I still don't think it's a good idea to show non-matching things, in general.
Right, we can't predict it – but I'm not sure I understand how the current behavior is better than the proposal at any rate of latency.
> > If we are dealing with this kind of latency, we could take a page out of
> Cliqz's book: they »freeze« the results once they detect intent to select
> > (arrow down or mouse move).
>
> That sounds like it would make things even worse, because at that point the
> wrong suggestion would be frozen in the results and usable.
Well, at that point the user has already deemed what they currently see as The Right Thing. So while it might not be consistent, it will be the thing that the user is trying to get to, and by freezing it we prevent the result from changing underneath the user's cursor.
Assignee
Comment 7•7 years ago
(In reply to (Currently slow to respond) Philipp Sackl [:phlsa] (Firefox UX) please use needinfo from comment #6)
> What I think you are describing is something like this:
> State 1 => local prediction of State 2 => State 2
>
> What I think we should do is:
> State 1 => State 1 => State 2
That's correct, solution 2 is exactly how autocomplete was _NOT_ designed, so it requires far more work, but it's a more general solution. Basically we need to be able to replace State 1 matches with State 2 matches in place, and, when we are sure that State 2 is stable, remove all the matches that appear in State 1 but not in state 2 AND reappend all the State 2 results that have been pushed out of the popup because State 1 matches were kept in place. It's a complete rewrite of the autocomplete results addition.
What I was suggesting was instead a half solution where we predict if a State 1 result matches State 2, and then we just assume it will be kept. That only requires in-place replacement of whatever doesn't match. It should be easier to do, but will be far less generic.
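To make the difference concrete, a rough sketch of the general flow (everything here is invented for illustration; replaceMatchAt/removeMatchAt and the bookkeeping are assumptions, not the actual patch):

// Illustrative only: `result` stands for an autocomplete result with
// hypothetical in-place replacement APIs.
function addState2Match(result, state1Leftovers, newMatch) {
  // Replace a still-visible State 1 match of the same kind in place, so the
  // popup doesn't jump while the new search is running.
  let i = state1Leftovers.findIndex(old => old.type == newMatch.type);
  if (i != -1) {
    let [old] = state1Leftovers.splice(i, 1);
    result.replaceMatchAt(old.index, newMatch);   // hypothetical API
  } else {
    result.appendMatch(newMatch);                 // hypothetical API
  }
}

function onState2Stable(result, state1Leftovers) {
  // Once State 2 is stable, drop every State 1 match that was never replaced,
  // removing bottom-up so earlier indices stay valid.
  for (let old of state1Leftovers.sort((a, b) => b.index - a.index)) {
    result.removeMatchAt(old.index);              // hypothetical API
  }
}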
Safari has a much easier life since they keep things separated, so each section has its own results and can just be hidden if there are no results for that section. On the other hand, that approach would also require a rewrite of the UI presentation.
> Well, at that point the user has already deemed what they currently see as
> The Right Thing. So while it might not be consistent, it will be the thing
> that the user is trying to get to and by freezing it we prevent the result
> from changing underneath the user's cursor.
On the other hand, this breaks all the users who move the mouse while typing; we got a lot of complaints in the past when we tried to do such a thing (yes, we already tried). It may sound like an exotic use case, but especially on Windows some users are used to typing a couple of letters and picking the result with the mouse, and through muscle memory they tend to move towards the result before it's even shown. It could surely be a vocal minority, but I can understand how frustrating it would be.
Updated•7 years ago
Summary: Don't remove search suggestions in the URL bar until a new suggestion set has arrived to avoid flickering → [photon] Don't remove search suggestions in the URL bar until a new suggestion set has arrived to avoid flickering
Reporter
Comment 8•7 years ago
(In reply to Marco Bonardo [::mak] from comment #7)
> (In reply to (Currently slow to respond) Philipp Sackl [:phlsa] (Firefox UX)
> please use needinfo from comment #6)
> > What I think you are describing is something like this:
> > State 1 => local prediction of State 2 => State 2
> >
> > What I think we should do is:
> > State 1 => State 1 => State 2
>
> That's correct, solution 2 is exactly how autocomplete was _NOT_ designed,
> so it requires far more work, but it's a more general solution. Basically we
> need to be able to replace State 1 matches with State 2 matches in place,
> and, when we are sure that State 2 is stable, remove all the matches that
> appear in State 1 but not in state 2 AND reappend all the State 2 results
> that have been pushed out of the popup because State 1 matches were kept in
> place. It's a complete rewrite of the autocomplete results addition.
>
> What I was suggesting was instead a half solution where we predict if a
> State 1 result matches State 2, and then we just assume it will be kept.
> That only requires in-place replacement of whatever doesn't match. It should
> be easier to do, but will be far less generic.
>
> Safari has a much easier life since they keep things separated, so each
> section has its own results, and can just be hidden if there are no results
> for that section. On the other hand, that approach would also require a
> rewrite of the UI presentation.
Well, I should probably stay out of the details of the implementation here. You're the expert there.
As long as we agree on what problems we are and aren't solving (and that flickering=bad), we should be good :)
> > Well, at that point the user has already deemed what they currently see as
> > The Right Thing. So while it might not be consistent, it will be the thing
> > that the user is trying to get to and by freezing it we prevent the result
> > from changing underneath the user's cursor.
>
> On the other hand, this breaks all the users who move the mouse while
> typing; we got a lot of complaints in the past when we tried to do such a
> thing (yes, we already tried). It may sound like an exotic use case, but
> especially on Windows some users are used to typing a couple of letters and
> picking the result with the mouse, and through muscle memory they tend to
> move towards the result before it's even shown. It could surely be a vocal
> minority, but I can understand how frustrating it would be.
Could you point me to the bug(s) or discussion on this? We can also table this for now or move it to a separate bug since it's an orthogonal line of thought.
Assignee
Updated•7 years ago
Assignee: nobody → mak77
Status: NEW → ASSIGNED
Comment hidden (mozreview-request)
Comment hidden (mozreview-request)
Assignee
Comment 11•7 years ago
Writing automated tests that check what happens "while" we add results is complicated; what we really care about is the final state, and I could only guess at intermediate tests. Off-hand I think we have decent coverage from existing tests, and we should add more tests once edge cases are discovered, so they are not just random tests added for the sake of it.
That said, considering the change, I'd appreciate a lot of help from the team testing this, and a confirmation from Philipp that it satisfies his requirements, or at least gets close.
I just triggered a Try run; it will take some time, see the Try link in the mozreview.
Flags: needinfo?(philipp)
Assignee
Comment 12•7 years ago
Hm, looks like the reflow tests are a bit less precise: on the 3 platforms (and also opt vs. debug) they now see a slightly different number of reflows (±2).
Maybe it's just a matter of increasing the timeout, will try.
Assignee
Comment 13•7 years ago
Looks like Mac and Windows just count different reflows :( Windows hits 6 fewer reflows. Also tried replacing the idle timeout with a normal 1s timeout, no luck.
My local system also counts a different number of reflows compared to Try.
Comment 14•7 years ago
(In reply to Marco Bonardo [::mak] from comment #13)
> Looks like Mac and Windows just count different reflows :( Windows hits 6
> fewer reflows. Also tried replacing the idle timeout with a normal 1s timeout,
> no luck.
> My local system also counts a different number of reflows compared to Try.
Is it possible that the screen (or window, really) size plays a role here? IIRC some of these reflows are triggered by overflow events, which may not happen with a bigger window.
Comment hidden (mozreview-request)
Comment hidden (mozreview-request)
Comment 17•7 years ago
mozreview-review
Comment on attachment 8908866 [details]
Bug 1391293 - Remove some hundreds reflows and adapt reflow tests to the new insert method.
https://reviewboard.mozilla.org/r/180486/#review186054
Looks reasonable. These reflow tests are unfortunately harder to work with than I would like. I hope we'll make them nicer in the future. Note: I haven't tried this locally, only looked at the code in this patch.
::: browser/base/content/test/performance/head.js:6
(Diff revision 2)
> "use strict";
>
> XPCOMUtils.defineLazyModuleGetter(this, "PlacesTestUtils",
> "resource://testing-common/PlacesTestUtils.jsm");
> +XPCOMUtils.defineLazyModuleGetter(this, "AppConstants",
> + "resource://gre/modules/AppConstants.jsm");
Are you using this anywhere?
::: browser/base/content/test/performance/head.js:147
(Diff revision 2)
> try {
> dirtyFrameFn();
> await testFn(dirtyFrameFn);
> } finally {
> for (let remainder of expectedReflows) {
> + if (!Number.isInteger(remainder.minTimes) || remainder.minTimes > 0) {
We need an example in the comment above withReflowObserver documenting the use of minTimes
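Something along these lines could work; purely illustrative — apart from minTimes, which the patch checks with Number.isInteger(), the entry shape below is an assumption:

// Illustrative entry only; `minTimes` is the new field, everything else is assumed.
const EXPECTED_REFLOWS = [
  {
    stack: ["_handleOverflow@chrome://browser/content/urlbarBindings.xml"], // hypothetical frame
    // This reflow may legitimately not happen at all (e.g. on some platforms
    // or window sizes), so a leftover entry with minTimes: 0 must not fail
    // the test.
    minTimes: 0,
  },
];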
::: browser/base/content/test/performance/head.js:251
(Diff revision 2)
> /**
> * Adds some entries to the Places database so that we can
> * do semi-realistic look-ups in the URL bar.
> */
> async function addDummyHistoryEntries() {
> + await PlacesTestUtils.clearHistory();
Probably doesn't hurt, but why do we need this? Is there another test leaving things behind?
Attachment #8908866 - Flags: review?(florian) → review+
Assignee
Comment 18•7 years ago
mozreview-review-reply
Comment on attachment 8908866 [details]
Bug 1391293 - Remove some hundreds reflows and adapt reflow tests to the new insert method.
https://reviewboard.mozilla.org/r/180486/#review186054
> Are you using this anywhere?
Not anymore; I used it and then thought to leave it here for the future. I can remove it.
> Probably doesn't hurt, but why do we need this? Is there another test leaving things behind?
It's very likely, and that's what I saw. In any case, tests that depend strictly on certain history states, like this one, should always clear history first.
Comment 19•7 years ago
Some drive-by review comments (I find it quite fun to look at code; I hope you don't mind):
* The call to _cleanUpNonCurrentMatches in notifyResults feels iffy: it's reentrant, and it will trigger almost immediately on the heuristic result's _addMatch call. Seems like it could be moved either to the start of the search, somewhere further down in Search#execute, or to _addMatch (triggering on the first search-or-history match), depending on what behavior is intended.
* The in-place replacement seems to silently assume that bucketing (matchBuckets vs matchBucketsSearch) doesn't change between searches. It's probably still an improvement in the case where it does, but it feels worth noting.
* "delete o.x" => "o.x = null" for performance, maybe. ("delete o.x" puts "o" into dictionary mode; see the small sketch after this list.)
* awaiting this._searchSuggestionsCompletePromise before the MATCH_BOUNDARY_ANYWHERE search kicks off seems bad on slow connections -- _cleanUpNonCurrentMatches could just as well happen off a promise.then, right?
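To make the dictionary-mode point above concrete, a tiny sketch (the object here is made up):

// Made-up object, just to show the pattern.
let match = { value: "amazon", style: "suggestion" };

// `delete match.style;` would drop the property but also switch `match` to
// dictionary mode, deoptimizing later property accesses.

// Nulling it out keeps the object's shape stable instead:
match.style = null;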
Assignee
Comment 20•7 years ago
(In reply to Simon Lindholm from comment #19)
> * The call to _cleanUpNonCurrentMatches in notifyResults feels iffy: it's
> reentrant, and it will trigger almost immediately on the heuristic result's
> _addMatch call.
Note that it only happens for a restrict behavior and when the search is no longer ongoing.
What do you mean by reentrant?
> * The in-place replacement seems to silently assume that bucketing
> (matchBuckets vs matchBucketsSearch) doesn't change between search. It's
> probably still an improvement in the case where it does, but it feels worth
> noting.
Yes, it will still do a decent job, so I decided to accept that.
> * awaiting this._searchSuggestionsCompletePromise before the
> MATCH_BOUNDARY_ANYWHERE search kicks off seems bad on slow connections --
> _cleanUpNonCurrentMatches could just as well happen off a promise.then,
> right?
I'll look into that. The idea is that in general suggestions may be more important than fallback local results, but I get your point.
Comment 21•7 years ago
(In reply to Marco Bonardo [::mak] from comment #20)
> (In reply to Simon Lindholm from comment #19)
> > * The call to _cleanUpNonCurrentMatches in notifyResults feels iffy: it's
> > reentrant, and it will trigger almost immediately on the heuristic result's
> > _addMatch call.
>
> Note that it only happens for a restrict behavior and when the search is no
> longer ongoing.
> What do you mean by reentrant?
>
Ahh -- I missed that that was an else if. It makes a lot more sense then.
By reentrant I meant that notifyResults calls _cleanUpNonCurrentMatches which calls notifyResults. Harmless, but it makes the logic slightly harder to follow (need to verify that no rug is being pulled under our feet), and might cause some unnecessary listener invocations.
Comment 22•7 years ago
mozreview-review
Comment on attachment 8908865 [details]
Bug 1391293 - Reduce Address Bar results flickering when typing additional chars.
https://reviewboard.mozilla.org/r/180484/#review186090
::: toolkit/components/places/UnifiedComplete.js:48
(Diff revision 2)
> ["maxHistoricalSearchSuggestions", 0],
> ["usepreloadedtopurls.enabled", true],
> ["usepreloadedtopurls.expire_days", 14],
> ["matchBuckets", "general:5,suggestion:Infinity"],
> ["matchBucketsSearch", ""],
> + ["insertMethod", 1], // 0: append, 1: replace-if-related-strings, 2: replace
Can you define consts for these? There are magic numbers in the code otherwise.
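Something like this could work (constant names below are just a suggestion, not necessarily what will land):

// Suggested constants for the insertMethod values; names are illustrative.
const INSERTMETHOD = {
  APPEND: 0,          // always append new matches at the end
  MERGE_RELATED: 1,   // replace in place only when the old match relates to the new string
  MERGE: 2,           // always replace in place
};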
::: toolkit/components/places/UnifiedComplete.js:1129
(Diff revision 2)
> + // Ideally we should wait until MATCH_BOUNDARY_ANYWHERE, but that query
> + // may be really slow and we may end up showing old results for too long.
> + this._cleanUpNonCurrentMatches(MATCHTYPE.GENERAL);
> +
> + // Ensure to fill any remaining space.
> + await this._searchSuggestionsCompletePromise;
I'm wondering if this makes it unlikely that most people will ever see any match-anywhere and pre-loaded results?
::: toolkit/components/places/UnifiedComplete.js:1132
(Diff revision 2)
> +
> + // Ensure to fill any remaining space.
> + await this._searchSuggestionsCompletePromise;
> + this._cleanUpNonCurrentMatches(MATCHTYPE.SUGGESTION);
> + if (!this.pending)
> + return;
Indentation is wrong here
::: toolkit/components/places/UnifiedComplete.js:1659
(Diff revision 2)
> let content = `${this._searchTokens[0]} ${suggestion.content}`;
> this._addExtensionMatch(content, suggestion.description);
> }
> }
> );
> + // Remove previous search matches sooner than the maximum timeout, otherwise
Why is it important to do this for extension matches and not other types?
::: toolkit/components/places/UnifiedComplete.js:1669
(Diff revision 2)
> // results, we add a timeout racing with the addition.
> let timeoutPromise = new Promise((resolve, reject) => {
> setTimeout(() => reject(new Error("timeout waiting for the extension to add its results to the location bar")),
> MAXIMUM_ALLOWED_EXTENSION_TIME_MS);
> });
> - this._allMatchesPromises.push(Promise.race([timeoutPromise, promise]).catch(Cu.reportError));
> + this._extensionsCompletePromise = Promise.race([timeoutPromise, promise]).catch(Cu.reportError);
Could this method maybe return a promise instead of relying on a property? It would be clearer I think, but up to you. Same question for _searchSuggestionsCompletePromise.
::: toolkit/components/places/UnifiedComplete.js:1900
(Diff revision 2)
> match.value,
> match.comment,
> match.icon,
> match.style,
> match.finalCompleteValue);
> + this._matchCount++;
Is it right to increment _matchCount if `replace` is true? I think so since that's what your comment describing _matchCount implies I think, but just want to make sure. Maybe rename it _currentMatchCount or _currentSearchMatchCount?
::: toolkit/components/places/UnifiedComplete.js:2359
(Diff revision 2)
> - let resultCode = result.matchCount ? "RESULT_SUCCESS" : "RESULT_NOMATCH";
> + let resultCode = this._matchCount ? "RESULT_SUCCESS" : "RESULT_NOMATCH";
> if (searchOngoing) {
> resultCode += "_ONGOING";
> + } else if (this.hasBehavior("restrict") &&
> + this._previousSearchMatchTypes.length > 0) {
> + // Remove any leftover results from the previous search.
It doesn't seem right to do this from a method named "notifyResults". notifyResults is called by _cleanUpNonCurrentMatches, so couldn't you do this in _cleanUpNonCurrentMatches?
Attachment #8908865 - Flags: review?(adw) → review+
Assignee
Comment 23•7 years ago
mozreview-review-reply
Comment on attachment 8908865 [details]
Bug 1391293 - Reduce Address Bar results flickering when typing additional chars.
https://reviewboard.mozilla.org/r/180484/#review186090
> I'm wondering if this makes it unlikely that most people will ever see any match-anywhere and pre-loaded results?
It shouldn't; with the bucketing system there's always space for local results if a bucket has space. In any case, I moved the search suggestions promise await back to the bottom, and just used its thenable to clean up old suggestions.
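Roughly, the shape of that change (simplified sketch, not the actual diff):

// Simplified Search-like object, only to show where the cleanup hooks in.
let search = {
  pending: true,
  _searchSuggestionsCompletePromise: Promise.resolve(),
  _cleanUpNonCurrentMatches(type) { /* prune stale matches of `type` */ },

  async execute() {
    // Instead of awaiting the suggestions promise here (which would block the
    // following local queries on the network), hook the cleanup onto it and
    // let the rest of the search continue immediately.
    this._searchSuggestionsCompletePromise.then(() => {
      if (this.pending) {
        this._cleanUpNonCurrentMatches("suggestion");
      }
    });
    // ...the match-anywhere and preloaded-sites queries can start right away...
  },
};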
> Why is it important to do this for extension matches and not other types?
Because unfortunately extensions don't have an API to notify when they are done adding matches. So what we do is give them 5 seconds after which we consider them done. But we don't want to wait 5 seconds before removing their previous matches.
I added a comment.
> Could this method maybe return a promise instead of relying on a property? It would be clearer I think, but up to you. Same question for _searchSuggestionsCompletePromise.
I made the change, but notice that we can't await on extensions and suggestions, since their timing is unpredictable, so we still need to wait for them at the end through a var.
> Is it right to increment _matchCount if `replace` is true? I think so since that's what your comment describing _matchCount implies I think, but just want to make sure. Maybe rename it _currentMatchCount or _currentSearchMatchCount?
Changed the name. It is correct to increment _matchCount because it's the "real" one; we always replace non-current results, not the current ones.
> It doesn't seem right to do this from a method named "notifyResults". notifyResults is called by _cleanUpNonCurrentMatches, so couldn't you do this in _cleanUpNonCurrentMatches?
Moved the code to finishSearch and also avoided the reentrancy.
Comment hidden (mozreview-request)
Comment hidden (mozreview-request)
Comment hidden (mozreview-request)
Comment hidden (mozreview-request)
Comment 28•7 years ago
Pushed by mak77@bonardo.net:
https://hg.mozilla.org/integration/autoland/rev/33a41b9cf92a
Reduce Address Bar results flickering when typing additional chars. r=adw
https://hg.mozilla.org/integration/autoland/rev/a03036beafc0
Remove some hundreds reflows and adapt reflow tests to the new insert method. r=florian
Comment 29•7 years ago
bugherder
https://hg.mozilla.org/mozilla-central/rev/33a41b9cf92a
https://hg.mozilla.org/mozilla-central/rev/a03036beafc0
Status: ASSIGNED → RESOLVED
Closed: 7 years ago
status-firefox57: --- → fixed
Resolution: --- → FIXED
Target Milestone: --- → Firefox 57
Updated•6 years ago