Closed
Bug 762710
Opened 12 years ago
Closed 7 years ago
Hangs viewing TBPL's parsed 'full' logs (large blocks of text inside <pre>)
Categories
(Core :: Layout, defect)
RESOLVED
INCOMPLETE
People
(Reporter: emorley, Unassigned)
References
(Depends on 1 open bug, Blocks 1 open bug)
Details
(Keywords: sheriffing-P1)
Attachments
(2 obsolete files)
(There's the similar bug 477495, but it was originally about reftest logs and was closed once then reopened, so keeping this separate for now)

1) Using latest Nightly, open any large tinderbox log, e.g. a mochitest-other log:
https://tbpl.mozilla.org/php/getParsedLog.php?id=12466251&tree=Mozilla-Inbound

Expected: Log loads in a similar amount of time to Chrome on my fairly slow machine (<6-7 seconds until the spinner stops); browser remains responsive (again, like Chrome).

Actual: Log takes 10-12 seconds to load; the browser window is unresponsive & often hangs to the extent that the tab strip disappears/the window goes opaque.

---

This bug affects me every single day using TBPL. Philor has resorted to using Chrome to view the logs, since the current behaviour is just so frustrating.

Profile using http://hg.mozilla.org/projects/profiling/rev/ca7362e9e9df & no addons other than the profiler addon:
http://people.mozilla.com/~bgirard/cleopatra/?report=AMIfv94U86IIpiVS17qIajTpXrF_hcFYS_AqTEN0_pwDRi9WYXytZtuSaT_d0IlGmakJVK22kYWu6ztKOE5atvVuI-OGa8dk3EcQVJc82h7yeBqsefEOqtnTBDuipN4Qy0YZowBOfDfmIe5eSamsJnDW2p_ShkxAMQ

Edit: Also just seen a handful of other bugs (bug 542877, bug 515447), but they're all old & may no longer have exactly the same root cause, so going to submit this anyway.
Comment 1•12 years ago
TIP: Use '*' to quickly expand all the nodes. The profile shows that we're really taxing the font code. Perhaps someone can tell us if we're calling this code too much or if it's not performing well.
Comment 2•12 years ago
(In reply to Benoit Girard (:BenWa) from comment #1) > TIP: Use '*' to quickly expand all the nodes. Not sure what '*' means in this context. The ancient option-toggle on the disclosure triangle expands all nodes. Off to a separate room to chant "this tool is awesome beyond belief" a thousand times...
Comment 3•12 years ago
Code breakdown, all percentages are of total time:

76.4% BuildTextRuns
58.8% MakeTextRun
55.5% -- gfxFontGroup::InitTextRun
47.3% ---- gfxFontGroup::InitScriptRun
33.9% ------ gfxFont::SplitAndInitTextRun
22.7% -------- gfxFont::GetShapedWord
13.8% ---------- hb_shape (== harfbuzz shaping)
 5.0% -------- base::Histogram::Add (!!!)
12.2% ------ gfxFontGroup::ComputeRanges (== font matching)
 7.5% ---- gfxScriptItemizer::Next (!!!)
 7.2% nsTextFrameUtils::TransformText (why???)
 5.6% BuildTextRunScanner::SetupBreakSinksForTextRun

So only 14% of the time is spent actually selecting and positioning glyphs; there's a lot of time spent in the overhead of scanning and assembling text runs. I don't think Histogram::Add should be taking 5%, that operation should be down in the noise. The code in gfxScriptItemizer::Next and nsTextFrameUtils::TransformText seem like good places to look for some easy optimization.
Reporter
Comment 4•12 years ago
(In reply to Benoit Girard (:BenWa) from comment #1) > TIP: Use '*' to quickly expand all the nodes. Ah thank you I did not know that, I'd been expanding them by hand lol.
Comment 5•12 years ago
(In reply to John Daggett (:jtd) from comment #3)
> Code breakdown, all percentages are of total time:
>
> 76.4% BuildTextRuns
> 58.8% MakeTextRun
> 55.5% -- gfxFontGroup::InitTextRun
> 47.3% ---- gfxFontGroup::InitScriptRun
> 33.9% ------ gfxFont::SplitAndInitTextRun
> 22.7% -------- gfxFont::GetShapedWord
> 13.8% ---------- hb_shape (== harfbuzz shaping)
> 5.0% -------- base::Histogram::Add (!!!)
> 12.2% ------ gfxFontGroup::ComputeRanges (== font matching)
> 7.5% ---- gfxScriptItemizer::Next (!!!)
> 7.2% nsTextFrameUtils::TransformText (why???)
> 5.6% BuildTextRunScanner::SetupBreakSinksForTextRun
>
> So only 14% of the time is spent actually selecting and positioning glyphs,
> there's a lot of time spent in the overhead of scanning and assembling text
> runs.
>
> I don't think Histogram::Add should be taking 5%, that operation
> should be down in the noise.

This comes from the telemetry calls added in bug 707959, and I think it indicates that we should remove (or at least simplify) them. Currently, gfxFont::GetShapedWord updates two histograms in all cases, and a further two if it hits the cache. For cache hits (especially), this is a huge overhead. I'm not convinced the data this generates is worth the cost here.

> The code in gfxScriptItemizer::Next and
> nsTextFrameUtils::TransformText seem like good places to look for some
> easy optimization.

gfxScriptItemizer::Next has to look at every character in the text and retrieve its Unicode script property. That's a fairly simple lookup (a few array accesses), and I doubt we can gain much there. However, it also checks for "paired" characters (opening and closing parens, etc.) so as to assign them to the same script run if possible. That uses a binary search of a list of 40+ character pairs. Maybe we could optimize that to use a faster lookup technique, at least for the ASCII range.
Comment 6•12 years ago
This cuts down the word-cache telemetry to just record hits and misses according to word length, with only a single call to Telemetry::Accumulate per GetShapedWord() instead of two or four calls. If it still seems too expensive, we could simplify further, e.g. by just recording the cache hit rate regardless of word length (thus eliminating the need to look up the appropriate bucket).
Attachment #631651 - Flags: review?(jdaggett)
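The shape of the change can be sketched as follows. This is a hypothetical scheme, not the actual patch: the probe name, bucket layout, and `accumulate` callback are invented for illustration. The idea it mirrors is folding the hit/miss flag and the word-length bucket into a single value, so the hot path makes one accumulate call instead of two or four.

```python
# Hypothetical sketch of folding word-cache telemetry into one probe.
# Probe name and bucket layout are invented; only the idea (a single
# accumulate call per GetShapedWord) reflects the patch described above.

MAX_LEN_BUCKET = 31  # cap word length; longer words share the top bucket

def record_word_cache(hit, word_length, accumulate):
    # Misses land in buckets 0..31, hits in 32..63: one call either way.
    bucket = min(word_length, MAX_LEN_BUCKET) + (MAX_LEN_BUCKET + 1 if hit else 0)
    accumulate("WORD_CACHE_LOOKUP", bucket)
```

Dropping the per-length split, as the comment suggests, would reduce this further to a boolean hit/miss probe with no bucket computation at all.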
Comment 7•12 years ago
Here's a possible optimization for gfxScriptRunItemizer, by adding a 256-byte array so we can directly look up pair indexes for 8-bit character codes. I've pushed this to tryserver to see if there's any detectable effect there.
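The idea can be illustrated with a small sketch (the pair list here is a hypothetical subset; the real itemizer's table covers 40+ pairs): precompute a 256-entry direct-index table so characters below U+0100 skip the binary search entirely.

```python
import bisect

# Hypothetical subset of the paired-character list: (codepoint, pair index).
# The real list in gfxScriptItemizer has 40+ entries, sorted by codepoint.
PAIRS = [(0x28, 0), (0x29, 1), (0x5B, 2), (0x5D, 3), (0x7B, 4), (0x7D, 5)]
CODES = [c for c, _ in PAIRS]

def pair_index_binary(ch):
    """Original approach: binary search of the sorted pair list."""
    i = bisect.bisect_left(CODES, ch)
    if i < len(PAIRS) and PAIRS[i][0] == ch:
        return PAIRS[i][1]
    return -1  # not a paired character

# Optimization: a 256-entry table giving O(1) lookup for 8-bit codes.
PAIR_TABLE = [-1] * 256
for code, idx in PAIRS:
    if code < 256:
        PAIR_TABLE[code] = idx

def pair_index_fast(ch):
    if ch < 256:
        return PAIR_TABLE[ch]      # direct index for the common Latin case
    return pair_index_binary(ch)   # fall back for higher codepoints
```

Since log text is overwhelmingly ASCII, nearly every lookup takes the table path.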
Comment 8•12 years ago
I'm tentatively making this block bug 763119 since the top functions are the same (FindFontForCharacter + gfxScriptItemizer::Next(...)).
Blocks: 763119
Comment 9•12 years ago
(In reply to Jonathan Kew (:jfkthame) from comment #6)
> Created attachment 631651 [details] [diff] [review]
> patch, simplify the word-cache telemetry
>
> This cuts down the word-cache telemetry to just record hits and misses
> according to word length, with only a single call to Telemetry::Accumulate
> per GetShapedWord() instead of two or four calls.
>
> If it still seems too expensive, we could simplify further, e.g. by just
> recording the cache hit rate regardless of word length (thus eliminating the
> need to look up the appropriate bucket).

I think the underlying problem is in the Histogram code; before juggling around our use of that code we should look at fixing whatever is causing the Histogram code to take up 5% (compare with all calls to hb_shape at 14%). That should be little more than bumping a counter.
Comment 10•12 years ago
(In reply to John Daggett (:jtd) from comment #9)
> I think the underlying problem is in the Histogram code, before juggling
> around our use of that code we should look at fixing whatever is causing the
> Histogram code to take up 5% (compare with all calls to hb_shape at 14%).
> That should be little more than bumping a counter.

Counter-bumping, plus deciding what histogram bucket into which to accumulate, plus some sanity-checking on the accumulated value (can't accumulate negative values, for instance). Call counts would be useful for figuring out what's going on here. Maybe you're just making a _lot_ of calls to Histogram::Add...
Comment 11•12 years ago
Spreadsheets using the latest word-caching data:

Word caching per script
https://docs.google.com/spreadsheet/ccc?key=0ArCKGq7OfNmMdDhhZXAwZ01GUF84YzdWdHgxZE03MWc

Word caching by length
https://docs.google.com/spreadsheet/ccc?key=0ArCKGq7OfNmMdDl5ZFJ6Y0dfd3dGWDk4YmtmbTl6ckE

There's clearly a Latin bias among those enabling Telemetry: common + latin = 94% of all word cache lookups.
Comment 12•12 years ago
Comment on attachment 631651 [details] [diff] [review]
patch, simplify the word-cache telemetry

Even though the per-script data is useful, there's an already large Latin bias among Telemetry users, so I think it's fine to trim the per-script metric.

This should probably be checked in on another bug; it only affects those with Telemetry enabled, and it's by no means even scratching the surface of the problem here.
Attachment #631651 - Flags: review?(jdaggett) → review+
Comment 13•12 years ago
(In reply to Jonathan Kew (:jfkthame) from comment #7)
> Created attachment 631654 [details] [diff] [review]
> patch, optimize getPairIndex in gfxScriptRunItemizer
>
> Here's a possible optimization for gfxScriptRunItemizer, by adding a
> 256-byte array so we can directly look up pair indexes for 8-bit character
> codes. I've pushed this to tryserver to see if there's any detectable effect
> there.

While this didn't have any clear impact on tryserver/talos, where text performance is only a small part of the overall score, it does make a significant difference when profiling layout of the tinderbox log from comment 0. Reloading that log from a locally-saved copy, it reduces the contribution of gfxScriptRunItemizer::Next from 15-16% of total time in gfxFontGroup::InitTextRun to about 9% of it.

However, I also have another approach that I'd like to try; not requesting review here just yet, till I see how the alternative works out.
Comment 14•12 years ago
(In reply to John Daggett (:jtd) (Away 15-22 June) from comment #12)
> Comment on attachment 631651 [details] [diff] [review]
> patch, simplify the word-cache telemetry
>
> Even though the per-script data is useful, there's an already large Latin
> bias among Telemetry users so I think it's fine to trim the per-script
> metric.
>
> This should probably be checked in on another bug, it only affects those
> with Telemetry enabled and it's by no means even scratching the surface of
> the problem here.

I've moved this to bug 763693, so we can keep track of it separately.
Comment 15•12 years ago
(In reply to Jonathan Kew (:jfkthame) from comment #13)
> (In reply to Jonathan Kew (:jfkthame) from comment #7)
> > Created attachment 631654 [details] [diff] [review]
> > patch, optimize getPairIndex in gfxScriptRunItemizer
> >
> > Here's a possible optimization for gfxScriptRunItemizer, by adding a
> > 256-byte array so we can directly look up pair indexes for 8-bit character
> > codes. I've pushed this to tryserver to see if there's any detectable effect
> > there.
>
> While this didn't have any clear impact on tryserver/talos, where text
> performance is only a small part of the overall score, it does make a
> significant difference when profiling layout of the tinderbox log from
> comment 0. Reloading that log from a locally-saved copy, it reduces the
> contribution of gfxScriptRunItemizer::Next from 15-16% of total time in
> gfxFontGroup::InitTextRun to about 9% of it.
>
> However, I also have another approach that I'd like to try; not requesting
> review here just yet, till I see how the alternative works out.

I've posted an alternative patch (with better performance) in bug 763703.
Updated•12 years ago
Attachment #631651 - Attachment is obsolete: true
Comment 16•12 years ago
Comment on attachment 631654 [details] [diff] [review]
patch, optimize getPairIndex in gfxScriptRunItemizer

Obsoleting the gfxScriptItemizer patch here, as bug 763703 provides a better approach.
Attachment #631654 - Attachment is obsolete: true
Reporter
Comment 17•12 years ago
I tried getting another profile now that the 3 dependent bugs have landed, but the profiler gets stuck on 'retrieving profile' :-(
Reporter
Comment 18•12 years ago
(Sorry submitted too soon) I don't suppose someone else can grab a fresh profile whilst I see if I can get the profiler working?
Reporter
Updated•12 years ago
Whiteboard: [sheriff-want]
Reporter
Comment 19•12 years ago
(In reply to Ed Morley [:edmorley] from comment #18)
> I don't suppose someone else can grab a fresh profile whilst I see if I can
> get the profiler working?

Seems to work ok now; managed to get a new profile with the 2012-07-26 Nightly:
http://people.mozilla.com/~bgirard/cleopatra/?report=613ff5ad2e74a8ca602848c3a53cb02100ee0c33

Would someone be able to interpret it again for me? :-)
Comment 20•12 years ago
Well...the single most expensive thing seems to be harfbuzz shaping (glyph layout). We're hoping to optimize that somewhat over the coming months; the focus has been on correctness first, with performance tuning to follow. Comparing to comment 3, we can see that gfxFontGroup::ComputeRanges and gfxScriptItemizer::Next have improved, as they're contributing significantly smaller percentages of the profile now; and the telemetry histogram is no longer dragging us down. So bugs 763693, 763703 and 764005 have helped somewhat, but obviously we still have lots to do.
Reporter
Comment 21•12 years ago
Thank you for taking a look - looking forward to the harfbuzz shaping optimisations over the coming months :-)
Comment 22•12 years ago
(In reply to Jonathan Kew (:jfkthame) from comment #20)
> Well...the single most expensive thing seems to be harfbuzz shaping (glyph
> layout). We're hoping to optimize that somewhat over the coming months; the
> focus has been on correctness first, with performance tuning to follow.

The time spent in shaping code in the original profile (in the description, breakdown listed in comment 3) was less than 14%. What's the percentage of time spent in harfbuzz that you're seeing here?
Comment 23•12 years ago
Ed's profile from comment #19 has 13.2% under hb_shape; I'm hopeful that upcoming harfbuzz optimizations will push that below 10% soon. Other areas that show up as expensive include nsBlockFrame::ResolveBidi (8.9%) and nsTextFrameUtils::TransformText (5.7%), the majority of which is gfxSkipCharsBuilder.
Comment 24•12 years ago
Ed, could you please re-profile once you have a build that includes bug 780409 (just pushed to inbound)? This includes some harfbuzz performance work that I think may give a measurable improvement here.
Reporter
Comment 25•12 years ago
(In reply to Jonathan Kew (:jfkthame) from comment #24)
> Ed, could you please re-profile once you have a build that includes bug
> 780409 (just pushed to inbound)? This includes some harfbuzz performance
> work that I think may give a measurable improvement here.

Done :-)
http://people.mozilla.com/~bgirard/cleopatra/?report=c2e6bea3647461c0675e59441b78c0f5c409ac0d

Win32 Nightly built from http://hg.mozilla.org/mozilla-central/rev/1bbc0b65dffb
Comment 26•12 years ago
So the contribution from hb_shape() is down from 13.2% to 9.3% of your total "jank" time... that's a good step in the right direction, though obviously we'd still like to go further. I wonder about bidi resolution; if we could (cheaply) detect situations where we can bypass that completely because there are no RTL characters or directional overrides involved, maybe we could get a win there.
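The proposed check can be sketched like this. It's a naive illustration of the fast-path test (not Gecko's implementation, which tests Unicode blocks rather than per-character bidi classes), and as the following comments discuss, keeping it correct under dynamic edits is the hard part.

```python
import unicodedata

# Bidi classes that force full bidi resolution: strong RTL (R, AL),
# Arabic numbers (AN), and the explicit directional controls
# (right-to-left embeddings, overrides, and isolates).
RTL_CLASSES = {"R", "AL", "AN", "RLE", "RLO", "RLI"}

def needs_bidi_resolution(text):
    """True if any character could produce right-to-left runs."""
    return any(unicodedata.bidirectional(ch) in RTL_CLASSES for ch in text)
```

For a typical all-ASCII build log this returns False after a single linear scan, which is far cheaper than running the full bidirectional algorithm.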
Comment 27•12 years ago
(In reply to Jonathan Kew (:jfkthame) from comment #26)
> I wonder about bidi resolution; if we could (cheaply) detect situations
> where we can bypass that completely because there are no RTL characters or
> directional overrides involved, maybe we could get a win there.

I'm sure that that could be a significant win, and I've tried to do it in the past, but it's trickier than it appears. The chief problem is that when detecting those situations there can be false positives that cause mis-ordering, for example when editing text and deleting RTL characters.
Comment 28•12 years ago
Re bidi optimization, we do that in Pango. If there are no RTL strong characters we assign LTR to all (I'm skipping details). But note that a few years ago I found an unintended glitch in the Bidi algorithm that allows RTL runs to appear out of text with no strong RTL. The bug is in the following rule:

"N1. A sequence of neutrals takes the direction of the surrounding strong text if the text on both sides has the same direction. European and Arabic numbers act as if they were R in terms of their influence on neutrals. Start-of-level-run (sor) and end-of-level-run (eor) are used at level run boundaries."

So, if there's a neutral in between two numerals, and rule W7 has not applied:

"W7. Search backward from each instance of a European number until the first strong type (R, L, or sor) is found. If an L is found, then change the type of the European number to L."

then we get in trouble. I don't have the full details off the top of my head, but can dig it up if there's interest.
Comment 29•12 years ago
I don't see that that's a glitch: even if there are no strong characters sor will always be found, so the only case where you can get RTL with no strong RTL characters (counting RLO and RLE as strong RTL for these purposes) will be if the base direction is RTL, no?
Comment 30•12 years ago
Simon, here's my original report from 2009:

=============

The rule N1 from http://www.unicode.org/reports/tr9/#N1 reads:

"""
N1. A sequence of neutrals takes the direction of the surrounding strong text if the text on both sides has the same direction. European and Arabic numbers act as if they were R in terms of their influence on neutrals. Start-of-level-run (sor) and end-of-level-run (eor) are used at level run boundaries.

R N R → R R R
L N L → L L L
R N AN → R R AN
AN N R → AN R R
R N EN → R R EN
EN N R → EN R R

Note that any AN or EN remaining after W7 will be in an right-to-left context.
"""

Bug 1: The text of the first paragraph says "European and Arabic numbers act as if they were R in terms of their influence on neutrals." It is not clear what this means. There are at least the following two possible interpretations:

* The text is trying to loosely describe the logic behind the six rules that follow and should not be taken literally. In particular, the sequences "AN N AN", "EN N EN", "AN N EN", and "EN N AN" are NOT processed as if AN and EN act like an R. This is most probably what the rule was meant to be. The text however is definitely wrong. My colleague's testing suggests that this is what OS X implements.

* Before applying the 6 rules listed, temporarily convert any AN or EN type to R, then proceed to apply the rules. This reading is what I implemented in FriBidi years ago. I just checked and the Java reference implementation also reads it like this. I didn't check the code but I'm fairly sure that the C++ reference implementation does the same. The problems with reading it like this are numerous:

- It conflicts with the 6 rules listed, as there will be no EN and AN anymore and the rules should be simplified to only:

R N R → R R R
L N L → L L L

- The major problem with this approach however is that it can produce strongly RTL characters in an otherwise LTR paragraph. This is inconsistent with the following paragraph from Implementation Notes:

"""
One of the most effective optimizations is to first test for right-to-left characters and not invoke the Bidirectional Algorithm unless they are present.
"""

Here is the test case: <U+0041,U+0661,U+002D,U+0662>. That's Latin capital letter A, Arabic digit 1, hyphen-minus, Arabic digit 2. The original bidi types are <L,AN,ES,AN>, and they reach rule N1 as <L,AN,N,AN>, at which point this reading of the rule N1 changes them to <L,R,R,R> and things go south from there.

Bug 2: The last line in rule N1 reads: "Note that any AN or EN remaining after W7 will be in an right-to-left context." This is wrong as my example above shows. The "L,AN" sequence reaches N1 fine and it's NOT in a "right-to-left context", whatever that means. That sentence should plain be removed.

===================

Here's the draft where this got applied to the standard: http://www.unicode.org/reports/tr9/tr9-20.html

This is what you wrote in response to my report back then:

====================

In bug 1, both your possible interpretations assume that the six lines starting "R N R → R R R" are rules. I had always interpreted them as examples of the results of applying the rule described in the text above. In practice this is equivalent to your second interpretation, but without the objection that it conflicts with the rules: temporarily convert any AN or EN type to R, then apply the rules "R N R -> R R R" and "L N L -> L L L". In any case the six lines in question can't be interpreted literally as exhaustive rules: at the least "N" has to be understood as "one or more N".

I agree that this interpretation is inconsistent with the text: "One of the most effective optimizations is to first test for right-to-left characters and not invoke the Bidirectional Algorithm unless they are present." That text is imprecise: it should say something like "first test for character types R, AL, RLE, RLO and AN, and not invoke the Bidirectional Algorithm unless either they are present, or a higher-level protocol has specified a right-to-left paragraph level". For what it's worth, this is more or less how Mozilla implements this optimization, though rather than explicitly testing for R and AL we just test for the blocks that are listed in the roadmaps as default right-to-left.

In bug 2, "any AN or EN remaining after W7 will be in an right-to-left context" should probably just read "any EN ..." (and "an right-to-left" should read "a right-to-left"). As far as I remember, this line was added because people often question why N1 says that "European ... numbers act as if they were R".

=================
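The bidi classes claimed for that test case are easy to check with Python's `unicodedata` module (this just verifies the report's character classifications; it is not part of any bidi implementation):

```python
import unicodedata

# The report's test case: A, Arabic-Indic 1, hyphen-minus, Arabic-Indic 2.
text = "\u0041\u0661\u002D\u0662"
classes = [unicodedata.bidirectional(ch) for ch in text]
# → ['L', 'AN', 'ES', 'AN'], matching the report; the ES (hyphen-minus)
# becomes a neutral N by the time rule N1 applies.
```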
Comment 31•12 years ago
(In reply to Simon Montagu from comment #27)
> (In reply to Jonathan Kew (:jfkthame) from comment #26)
> > I wonder about bidi resolution; if we could (cheaply) detect situations
> > where we can bypass that completely because there are no RTL characters or
> > directional overrides involved, maybe we could get a win there.
>
> I'm sure that that could be a significant win, and I've tried to do it in
> the past, but it's trickier than it appears. The chief problem is that when
> detecting those situations there can be false positives that cause
> mis-ordering, for example when editing text and deleting RTL characters.

If removing characters from a textnode is a problem, we can probably optimize the general case and have the textnode set a flag or something when its data is trimmed down, and also set the same flag on the sibling textnodes when a textnode including an RTL character gets removed from the tree.
Reporter
Updated•12 years ago
Keywords: sheriffing-P1
Whiteboard: [sheriff-want]
Reporter | ||
Comment 32•12 years ago
Does anyone have any suggestions for ways in which we can change the markup to avoid the hangs?
Reporter
Updated•11 years ago
OS: Windows 7 → All
Hardware: x86 → All
Comment 33•11 years ago
Can anyone from the perf time spend some time on this in Q4? Thanks.
Updated•11 years ago
Flags: needinfo?(vdjeric)
Comment 34•11 years ago
s/perf time/perf team/
Comment 35•11 years ago
At the recent work-week, smontagu was experimenting with a patch that might help significantly by allowing us to bypass much of the bidi work. Simon, is that looking like it may be a workable way forward? If so, it'd be awesome to get it to a landable state.
Flags: needinfo?(smontagu)
Comment 36•11 years ago
I did some investigation here and the main problem here is that we are shaping the *entire* contents of the pre element contained in logfiles (or the entire contents of a raw text file) within a single text run. This means that until that's done no events will be processed. While perf improvements would definitely be helpful, we really need to somehow chunk the processing of large text elements so that the layout code can better deal with interrupting reflow to prevent UI hangs.
Comment 37•11 years ago
(In reply to John Daggett (:jtd) from comment #36)
> I did some investigation here and the main problem here is that we are
> shaping the *entire* contents of the pre element contained in logfiles (or
> the entire contents of a raw text file) within a single text run. This
> means that until that's done no events will be processed. While perf
> improvements would definitely be helpful, we really need to somehow chunk
> the processing of large text elements so that the layout code can better
> deal with interrupting reflow to prevent UI hangs.

Could we change the log generator to chunk the <pre> elements manually as a stopgap?
Comment 38•11 years ago
(In reply to Nathan Froyd (:froydnj) from comment #37)
> Could we change the log generator to chunk the <pre> elements manually as a
> stopgap?

Hmm, that would be interesting to try I think. I did a quick test throwing in </pre><pre> every 30 lines and that definitely splits up the text runs into more reasonable sizes without the "one big text run" situation.

This may be a bz question or someone who understands the interaction of gzip/chunked loading on layout. With an uncompressed version of a logfile I can actually start scrolling fairly quickly but with the existing gzip/chunked version served by tbpl the file displays but no interaction is possible for several seconds.
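A minimal sketch of what the log generator could do (a hypothetical helper; TBPL's actual generator is PHP, and the 30-line chunk size is just the value used in the quick test described above):

```python
def chunk_pre(log_text, lines_per_block=30):
    """Split log text into multiple <pre> blocks so layout builds
    several small text runs instead of one huge one."""
    lines = log_text.splitlines()
    blocks = []
    for i in range(0, len(lines), lines_per_block):
        chunk = "\n".join(lines[i:i + lines_per_block])
        blocks.append("<pre>" + chunk + "</pre>")
    return "\n".join(blocks)
```

(A real generator would also need to HTML-escape the log text before wrapping it; that step is omitted here for brevity.)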
Comment 39•11 years ago
(In reply to John Daggett (:jtd) from comment #38)
> (In reply to Nathan Froyd (:froydnj) from comment #37)
> > Could we change the log generator to chunk the <pre> elements manually as a
> > stopgap?
>
> Hmm, that would be interesting to try I think. I did a quick test throwing
> in </pre><pre> every 30 lines and that definitely splits up the text runs
> into more reasonable sizes without the "one big text run" situation.
>
> This may be a bz question or someone who understands the interaction of
> gzip/chunked loading on layout. With an uncompressed version of a logfile I
> can actually start scrolling fairly quickly but with the existing
> gzip/chunked version served by tbpl the file displays but no interaction is
> possible for several seconds.

Just for clarification: does the uncompressed version you're talking about here have manually chunked <pre> elements? If so, you're comparing the uncompressed, "hand-optimized" version of the log to the version coming from tbpl, which we already know is slow. Is that correct, or am I missing something here?
Comment 40•11 years ago
(In reply to Nathan Froyd (:froydnj) from comment #39)
> Just for clarification: Does the uncompressed version you're talking about
> here have manually chunked <pre> elements? If so, you're comparing the
> uncompressed, "hand-optimized" version of the log to the version coming from
> tbpl, which we already know is slow. Is that correct, or am I missing
> something here?

Yes, that's right; I was comparing the content served by tinderbox in gzip/chunked form with an uncompressed file to which I added '</pre><pre>' every 30 lines:

http://people.mozilla.org/~jdaggett/bigtext/mochitest1-win8-opt-sliced.html

With the tinderbox version, the first portion of the log loads but then the window locks up until the load completes. With the uncompressed, sliced version the logfile loads and it's possible to start scrolling; the browser doesn't hang until the load completes. It would be interesting to see if this file, served up the same way tinderbox files are served in gzip/chunked form, would avoid the hang or not. While reducing the overall load time is certainly important, I think the most important thing here is to avoid the hang.
Comment 41•11 years ago
(In reply to Jonathan Kew (:jfkthame) from comment #35)
> At the recent work-week, smontagu was experimenting with a patch that might
> help significantly by allowing us to bypass much of the bidi work.

That was a variation of the patch in bug 646359, and I still have the same problem that I did before with repainting regressions in textareas.
Depends on: 646359
Flags: needinfo?(smontagu)
Comment 42•11 years ago
(In reply to Brian R. Bondy [:bbondy] from comment #33)
> Can anyone from the perf team spend some time on this in Q4? Thanks.

Unless I'm misunderstanding the discussion in the bug so far, it sounds like the fonts code needs optimizing. No one on the perf team has the requisite knowledge of fonts & font code.
Flags: needinfo?(vdjeric)
Reporter
Updated•10 years ago
Summary: Hangs viewing tinderbox logs → Hangs viewing TBPL's parsed logs (large blocks of text inside <pre>)
Reporter
Updated•10 years ago
Summary: Hangs viewing TBPL's parsed logs (large blocks of text inside <pre>) → Hangs viewing TBPL's parsed 'full' logs (large blocks of text inside <pre>)
Reporter
Comment 43•7 years ago
Mass-closing old bugs I filed that have not had recent activity/no longer affect me.
Status: NEW → RESOLVED
Closed: 7 years ago
Resolution: --- → INCOMPLETE