Closed Bug 1269698 (gdoc_read_utf8_txt_1(88%)) Opened 9 years ago Closed 3 years ago

[perf][google suite][google docs] 88.01%(7750 ms) slower than Chrome when opening 1 page UTF8 content

Categories

(Core :: JavaScript Engine, defect, P2)

45 Branch
x86
Linux
defect

Tracking


RESOLVED INACTIVE
Performance Impact ?
Tracking Status
platform-rel --- -

People

(Reporter: sho, Unassigned)

References

(Depends on 1 open bug, Blocks 2 open bugs)

Details

(Keywords: perf, Whiteboard: [platform-rel-Google][platform-rel-GoogleSuite][platform-rel-GoogleDocs])

User Story

You can find all the test scripts in the GitHub repository:
https://github.com/mozilla-twqa/hasal

You can also find the test script name in the bug comments, for example:
test script: test_chrome_gdoc_create_txt_1
You can then specify that test script name in suite.txt and run it.

Attachments

(2 files)

# Test Case

STR
1. Launch the browser with a blank page
2. Input the Google Doc URL with 1 page of txt content (UTF-8)
3. Close the browser

# Hardware

OS: Ubuntu 14.04 LTS 64-bit
CPU: i7-3770 3.4 GHz
Memory: 16 GB RAM
Hard Drive: 1TB SATA HDD
Graphics: GK107 [GeForce GT 640] / GF108 [GeForce GT 440/630]

# Browsers

Firefox version: 45.0.2
Chrome version: 50.0.2661.75

# Result

Browser | Run time (median value)
Firefox | 16555.5556 ms
Chrome  | 8805.5556 ms
Product: Firefox → Core
Version: unspecified → 45 Branch
User Story: (updated)
# Profiling Data:
https://cleopatra.io/#report=c9ad18b88e6461fb7848626e224409e93fc70031&filter=[{%22type%22%3A%22RangeSampleFilter%22,%22start%22%3A32390,%22end%22%3A38117}]

# Performance Timing: http://goo.gl/zB9Mqo
{
  "navigationStart": 1463119205801,
  "unloadEventStart": 0,
  "unloadEventEnd": 0,
  "redirectStart": 0,
  "redirectEnd": 0,
  "fetchStart": 1463119205806,
  "domainLookupStart": 1463119205806,
  "domainLookupEnd": 1463119205806,
  "connectStart": 1463119205806,
  "connectEnd": 1463119205806,
  "requestStart": 1463119205809,
  "responseStart": 1463119206122,
  "responseEnd": 1463119206122,
  "domLoading": 1463119206124,
  "domInteractive": 1463119207332,
  "domContentLoadedEventStart": 1463119208255,
  "domContentLoadedEventEnd": 1463119208256,
  "domComplete": 1463119210685,
  "loadEventStart": 1463119210685,
  "loadEventEnd": 1463119210692
}

# Test Script:
https://github.com/Mozilla-TWQA/Hasal/blob/master/tests/test_firefox_gdoc_read_utf8_txt_1.sikuli/test_firefox_gdoc_read_utf8_txt_1.py
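For reference, the load duration compared between browsers later in this bug can be derived from the Navigation Timing dump above. The snippet below is a minimal sketch (not part of the Hasal harness) that can be pasted into the devtools console after the page finishes loading; applied to the dump above it gives domComplete - navigationStart = 4884 ms.

// Minimal sketch: derive load durations from window.performance.timing.
// Not part of the Hasal test scripts; paste into the devtools console.
const t = window.performance.timing;
const loadTime = t.domComplete - t.navigationStart;   // 4884 ms for the dump above
const phases = {
  network: t.responseEnd - t.fetchStart,
  domInteractive: t.domInteractive - t.navigationStart,
  domContentLoaded: t.domContentLoadedEventEnd - t.navigationStart,
  domComplete: loadTime
};
console.log(phases);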
QA Contact: fatseng
From Gecko Profiling data, the Range [32390, 38188]:

5679 100%  Startup::XRE_Main
1595 28.1% ├─ nsInputStreamPump::OnInputStreamReady
1419 25.0% │  ├─ nsInputStreamPump::OnStateStop      > ref: bug 1267971
 122  2.2% │  ├─ nsInputStreamPump::OnStateTransfer
  53  0.9% │  └─ nsInputStreamPump::OnStateStart
1808 19.0% ├─ nsHtml5TreeOpExecutor::RunFlushLoop
 933 16.4% │  ├─ nsJSUtils::EvaluateString           > ref: bug 1270351
 869 15.3% │  │  └─ js::RunScript
           │  └─ ...so on
           └─ ...so on
It looks like nsInputStreamPump::OnStateStop and nsHtml5TreeOpExecutor::RunFlushLoop spend a lot of their time running JavaScript.
Whiteboard: [platform-rel-Google][platform-rel-GoogleDocs]
platform-rel: --- → ?
Flags: needinfo?(tlee)
Severity: normal → critical
Priority: -- → P1
Flags: needinfo?(overholt)
Flags: needinfo?(kchen)
Flags: needinfo?(bugs)
Maybe Henri has thoughts here? Or Jonathan?
Flags: needinfo?(overholt)
Flags: needinfo?(jfkthame)
Flags: needinfo?(hsivonen)
(In reply to Andrew Overholt [:overholt] from comment #5)
> Maybe Henri has thoughts here?

nsHtml5TreeOpExecutor::RunFlushLoop() is responsible for executing scripts given as <script> tags, so it seems to me that this profile just shows the page(s) having scripts that take a long time to run when first parsed/executed.
Flags: needinfo?(hsivonen)
Specifically, inline as in not <script src> but <script> // something </script>.
ni?'ing myself to remember to poke at it.
Flags: needinfo?(kyle)
platform-rel: ? → +
It looks like all the time is being spent in the JS engine.
# Hardware

OS: Windows 7
CPU: i7-3770 3.4 GHz
Memory: 16 GB RAM
Hard Drive: 1TB SATA HDD
Graphics: GK107 [GeForce GT 640] / GF108 [GeForce GT 440/630]

# Browsers

Firefox version: 47
Chrome version: 51.0.2704.103

# Result

Browser | Run time (median value)
Firefox | 9811 ms
Chrome  | 5214 ms

Compared to the old 88.01%-slower results, the Windows 7 results show almost no difference (88.17%). However, please note that both browsers perform better on Windows 7 than on Ubuntu.
Alias: gdoc_read_utf8_txt_1
Alias: gdoc_read_utf8_txt_1 → gdoc_read_utf8_txt_1(88%)
do we have more findings on this case?
Flags: needinfo?(overholt)
Flags: needinfo?(kyle)
Flags: needinfo?(kchen)
Flags: needinfo?(bugs)
Elsewhere I had asked if Chrome was being sent different content than Firefox is. It looks like the answer is largely no when we're spoofing the UA string, but can we please manually verify that using Chrome itself?

Naveed told me he and the JS team would really like to see a comparison of the % of time Chrome is spending in V8's interpreter vs the % of time Firefox is spending in SpiderMonkey's interpreter. Is it possible to compare the browsers' profiler output?
Flags: needinfo?(overholt)
Flags: needinfo?(jfkthame)
Flags: needinfo?(bchien)
(In reply to Andrew Overholt [:overholt] from comment #12)
> Elsewhere I had asked if Chrome was being sent different content than
> Firefox is. It looks like the answer is largely no when we're spoofing the
> UA string but can we please manually verify that using Chrome itself?

Could the data being sent to Chrome be served via QUIC, and could it be different because of that?
Component: General → JavaScript Engine
(In reply to Andrew Overholt [:overholt] from comment #13)
> (In reply to Andrew Overholt [:overholt] from comment #12)
> > Elsewhere I had asked if Chrome was being sent different content than
> > Firefox is. It looks like the answer is largely no when we're spoofing the
> > UA string but can we please manually verify that using Chrome itself?
>
> Could the data being sent to Chrome be served via quic and that's different?

That is a good question. Does Chrome get the data faster? The profiler wouldn't really show that.
Andrew, (In reply to Andrew Overholt [:overholt] from comment #12)
> Elsewhere I had asked if Chrome was being sent different content than
> Firefox is. It looks like the answer is largely no when we're spoofing the
> UA string but can we please manually verify that using Chrome itself?
>
> Naveed told me he and the JS team would really like to see a comparison of
> the % of time Chrome is spending in V8's interpreter vs the % of time
> Firefox is spending in SpiderMonkey's interpreter. Is it possible to compare
> the browsers' profiler output?

Do you have steps to disable on Firefox and Chrome? We can try to collect that information from the test framework in the future.
Flags: needinfo?(bchien) → needinfo?(overholt)
Naveed, see comment 15.
Flags: needinfo?(overholt) → needinfo?(nihsanullah)
I do not understand the question. Disable what? quic?

From what I understand, Naveed asked if you can look at the profiler output to tell what percentage of time is spent in the JS engine for Firefox vs Chrome. I don't know how to do that with Cleopatra, but I do see a "Javascript only" checkbox under "View Options", so perhaps it does have some notion of when JS is running? The Firefox devtools performance tab seems to have detailed knowledge of what is running, but I'm not sure how to get the time spent running JS out of it.

It would be informative to get tracelogger output for this test run.
https://developer.mozilla.org/en-US/docs/Mozilla/Projects/SpiderMonkey/Hacking_Tips#Using_the_TraceLogger_(JS_shell_browser)
Apparently, it is able to log the output of the full browser. It seems to be possible with

  TLLOG=Default TLOPTIONS=EnableMainThread,EnableOffThread,EnableGraph $firefox ...args...

Would it be possible to get a run with those variables set? It appears to me that tracelogging is enabled by default, and in fact the toplevel configure has no way to disable it.

The tracelogger will write a whole bunch of files to /tmp/tl-* on non-Windows, or the current directory on Windows. If it's not too large, please upload an archive here. https://github.com/h4writer/tracelogger has a README.md that describes how to view these.
Attached file tracelogger zip file for this test (deleted) —
Hi Steve, based on the tracelogger instructions, I ran the test again with tracelogging enabled and zipped all the logs into one zip file. Please check attachment 8777244 [details]. Hope this helps; feel free to ask if there are any questions. Shako
Sorry, I finally looked at this today. Unfortunately, all of those traces were empty. I will have to investigate whether you need a special build or something. Just to confirm, you had both environment variables set?

  TLLOG=Default
  TLOPTIONS=EnableMainThread,EnableOffThread,EnableGraph

I will try to learn more about this process tomorrow, but I'm going to ni? h4writer in hopes that he knows what might be going on.
Flags: needinfo?(hv1989)
(In reply to Steve Fink [:sfink] [:s:] from comment #20)
> Sorry, I finally looked at this today. Unfortunately, all of those traces
> were empty. I will have to investigate whether you need a special build or
> something. Just to confirm, you had both environment variables set?
>
> TLLOG=Default
> TLOPTIONS=EnableMainThread,EnableOffThread,EnableGraph
>
> I will try to learn more about this process tomorrow, but I'm going to ni?
> h4writer in hopes that he knows what might be going on.

That looks correct to me. You don't need a special build or anything. That should do it.
Flags: needinfo?(hv1989)
gweng - it would be great if you could get the tracelogger output for this one too. Thanks!

bchien - your question to Naveed was unclear to me in comment 15. If you could restate it, I will figure out how to answer.
Flags: needinfo?(nihsanullah) → needinfo?(gweng)
I finally figured out how to dig the gdocs urls out of the hasal repo. For example, if I'm not mistaken, this bug refers to https://docs.google.com/document/d/1ktU9DnreProcMXa0L5PBCateh-xJYbKqsf-3TgwAJhI/edit (The har file attached seems to use something different? It says document not found if I load that url.)

On my test runs (with FF47), I'm seeing a *massive* difference between e10s and non-e10s. If I start the gecko profiler, reload, and then analyze it, I see about 1000 samples with non-e10s, and around 41000 with e10s, and gdocs says "Reconnecting..." while it's loading. I didn't time it myself, but it clearly takes much much longer. Cleopatra says that 85% of the time is spent in __poll_nocancel.
https://cleopatra.io/#report=5a894f82b2d1aafa16727c6dafe1083e36920ef9&invertCallback=true

A non-e10s version is at https://cleopatra.io/#report=12456725b354320ba92e1d5f8755b46a087a221e&filter=%5B%7B%22type%22%3A%22RangeSampleFilter%22,%22start%22%3A75579,%22end%22%3A76611%7D%5D where the polling is completely absent.

Looking at the e10s case, the Firefox DevTools Performance chart runs out of buffer space before it manages to load, but it shows
  4.5s in the edit:12 script
  ...
  2.9s in the edit:12 script
  ...
  4.25s in a readystatechange DOM event handler, running script
  ...
  1.8s paint
  ...
  1.6s paint

Quite a bit of the time in that edit:12 script is actually in layout -- about half of the time in that second 2.9s chunk. But the other half of the 2.9s chunk and the majority of the 4.5s chunk appear to simply be running the script. The Perf view doesn't display parsing vs execution.

The tracelogger output for these is problematic. It seems to be truncating its string dictionary in e10s mode, which might be why tracelogger script names and cleopatra script names are not matching up. I think I'll have to fix that problem. Though tracelogger shouldn't be necessary -- in theory, the cleopatra profiler ought to have enough information to delve into this.

I just generated another non-e10s profile, and this time it took longer to load. About 6000 samples this time, midway between my previous 1000 (non-e10s) and 41000 (e10s). Still no polling. A fair amount of pthread_cond_wait, though. (Not enough to make up all the difference.) It is certainly the case that e10s is doing *way* worse here.
Steve, do you see the same results in FF48? I tested with that and I don't see an appreciable difference between e10s and non-e10s. If anything, e10s is faster. I just loaded the page, opened the console, and typed: "window.performance.timing.domComplete - window.performance.timing.navigationStart". Both give about 3.5 seconds for me. I didn't test with the profiler. Maybe that has some effect?
Greg told me about Steve's investigation. I also found similar things in another bug. I had planned to measure the cost of IPC, and I found that some APIs, like XHR, would use sync IPC. The solution is to remove these sync IPC messages.

I also found that there are sync reflows in the script. They get the width and height of some DIVs. I haven't looked into the JS code yet, but I guess they measure the dimensions of some text after reflow, or something like that. For now, what I see is that Gecko builds the whole frame tree and reflows everything that is pending. But I think Gecko needs to reflow only the ancestor frames and the content of the target DIVs.
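To illustrate the sync-reflow pattern described above (a hypothetical sketch, not the actual Google Docs code; the selectors and style writes are made up): reading offsetWidth/offsetHeight right after a DOM write forces layout to flush on every iteration, while batching all writes before all reads lets the engine flush layout only once.

// Hypothetical illustration of the sync-reflow pattern described above;
// the selectors and styles are made up, not taken from the gdocs scripts.
function measureInterleaved(divs) {
  var sizes = [];
  for (var i = 0; i < divs.length; i++) {
    divs[i].style.padding = "1px";            // DOM write invalidates layout
    sizes.push({ w: divs[i].offsetWidth,      // read forces a sync reflow each iteration
                 h: divs[i].offsetHeight });
  }
  return sizes;
}

function measureBatched(divs) {
  for (var i = 0; i < divs.length; i++) {
    divs[i].style.padding = "1px";            // all writes first
  }
  return Array.prototype.map.call(divs, function (div) {
    return { w: div.offsetWidth, h: div.offsetHeight };  // first read flushes layout once
  });
}

// Usage (hypothetical): measure all DIVs on the page both ways.
var divs = document.querySelectorAll("div");
console.log(measureInterleaved(divs), measureBatched(divs));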
Flags: needinfo?(tlee)
Reply to comment 23: Hi Steve, the Hasal framework clones the template document before running the test and removes the clone after the test finishes, so what you found here is the template document, and the URL within the HAR file is the clone target.
(In reply to Thinker Li [:sinker] PTO Sept 1 ~ 14 from comment #25)
> Greg told me about steve's investigation. I also found the similar things
> in another bug.

What bug?

> I had planned to measure the cost of IPC. I had found some APIs; like XHR,
> would use sync IPC. The solution is to remove these sync IPC messages.

Where does XHR use sync IPC?
(In reply to Bill McCloskey (:billm) from comment #27)
> (In reply to Thinker Li [:sinker] PTO Sept 1 ~ 14 from comment #25)
> > Greg told me about steve's investigation. I also found the similar things
> > in another bug.
>
> What bug?

Bug 1269690.

> > I had planned to measure the cost of IPC. I had found some APIs; like XHR,
> > would use sync IPC. The solution is to remove these sync IPC messages.
>
> Where does XHR use sync IPC?

xhr.open() would cause a sync message to construct HttpChannelParent. I don't know how much code like this is involved in the gdoc cases. I will open another bug to study it.
I spent some time on bug 1269690 and got the same result about e10s, although they're different bugs. Since we now have a different result according to comment 24, maybe we should have the ES team run a complete test across different versions, with and without e10s.

As I mentioned in bug 1269690, my modified version pulled the inline scripts out into 4 files. Even so, the "script tag" section still took almost the same amount of time to complete, and the difference between with and without e10s also occurred. That's why I feel a more detailed examination is necessary.
Flags: needinfo?(gweng) → needinfo?(sho)
Depends on: 1296160
Depends on: 1296161
If you are profiling e10s with the GeckoProfiler addon, then I recommend using a build with this code commented out: https://dxr.mozilla.org/mozilla-central/source/toolkit/components/addoncompat/RemoteAddonsChild.jsm#406-411

The GeckoProfiler addon uses the add-on SDK panel module, which triggers a synchronous RPC call for every element inserted into the DOM.
(In reply to Thinker Li [:sinker] PTO Sept 1 ~ 14 from comment #28)
> > Where does XHR use sync IPC?
>
> xhr.open() would cause a sync message to contructor HttpChannelParent.
> I don't how much relative code like this in gdoc cases. I will open another
> bug to study it.

I'm not sure what you mean. That constructor is marked async.
(In reply to Bill McCloskey (:billm) from comment #31)
> I'm not sure what you mean. That constructor is marked async.

You are right! I made a mistake here.
Reply to comment 28: Hi Greg, we have done some tests on e10s before; you can refer to the report below: https://goo.gl/rw3JUm
Flags: needinfo?(sho)
I'm wondering if we can have some Windows results? According to my findings, on Windows the gap between e10s and non-e10s is pretty small, even negligible. However, without your test environment and method I cannot draw any conclusions.
Flags: needinfo?(sho)
Sure, we can run a test with e10s and non-e10s on Windows for this bug. We will update you with the result when we finish the test.
Flags: needinfo?(sho)
Mike will help with this test.
Flags: needinfo?(mlien)
(In reply to Bill McCloskey (:billm) from comment #24)
> Steve, do you see the same results in FF48? I tested with that and I don't
> see an appreciable difference between e10s and non-e10s. If anything, e10s
> is faster. I just loaded the page, opened the console, and typed:
> "window.performance.timing.domComplete -
> window.performance.timing.navigationStart". Both give about 3.5 seconds for
> me. I didn't test with the profiler. Maybe that has some effect?

I tried testing with my Nightly 51. With e10s, it was even worse:
https://cleopatra.io/#report=64e2579cfb76e1ccea550561806b306c911bc776&invertCallback=true&selection=%22(total)%22,107,106,104,93,90,11,4,1

But from the profile, that looks like something involving paint and sync. All the time is in pthread_cond_wait, called by Paint, called by nsRefreshDriver::Tick, if I'm reading it right. Then again, that browser window is being horribly sluggish on the Cleopatra page too, so I'm not sure how much to trust it. This is with the profiler running, but after .

Though I just restarted with the profiler disabled, and I'm still getting 65 seconds (with your expression above). Something is horribly wrong. (My computer isn't especially active.)
As billm suggested to me on IRC, my slowdowns were due to using accelerated graphics on Linux. I no longer have the horrible 60 second loads.
A dump of a set of timings I just did:

- Timings:
  - all with JSGC_DISABLE_POISONING=1
  - all with TL off
  - window.performance.timing.domComplete - window.performance.timing.navigationStart
  - and with devtools window closed (seems slower with it open)
- non-e10s
  - with profiler: 4130ms 4366ms 4516ms 4130ms = https://cleopatra.io/#report=bcafe60888c888f5261ea79a2b18d800580493ff
  - profiler paused: 4085ms 8036ms (saw a network thing pop up) 3902ms 4330ms 4429ms
- e10s
  - with profiler: 5095ms 4273ms 4690ms 8337ms (network) 3789ms 4690ms = https://cleopatra.io/#report=c1afe580c1be80ec7f4e247a2db8d422d002d8c8
  - profiler paused: 5196ms 9796ms 4309ms
  - profiler disabled: 4531ms 3626ms 5401ms
- google-chrome-stable: 6006ms 1086ms 9ms 3313ms 283ms

Chrome was strange. Interacting with Chrome's chrome (the devtools command line etc.) felt very sluggish. I thought the page reload felt better -- not just in terms of time, but in the batching of UI elements and page content appearing -- but when I compared them side by side, I couldn't tell a difference.

You can see that the window.performance.timing.domComplete - window.performance.timing.navigationStart numbers are wildly erratic. For Chrome I don't know what they're measuring; all of those reloads felt about the same to me. I can't really see anything in those numbers, other than saying that there are no clear slowdowns in any of the scenarios. And that the network introduces a huge amount of variance.

I think I'll give up on the big picture stuff and look more closely at script parsing and execution for now.
(In reply to Greg Weng [:snowmantw][:gweng][:λ] from comment #34)
> I'm wondering if we can have some Windows result? Since according to my
> finding, it shows that on Windows the gap between e10s and non-e10s is
> pretty small and even negligible. However, without your test environment and
> method I cannot make any conclusion.

From our 30 test runs on the try server build:
https://archive.mozilla.org/pub/firefox/try-builds/gweng@mozilla.com-85df25f4411f2137508eed87c690cc1fda79003c/try-win64/
on Windows, e10s and non-e10s differ by only about 150 ms in median time. We also collected the Firefox performance recording data; please refer to the result link below for more detailed information:
https://drive.google.com/open?id=0BwkEhia_D6l_VDlQR2hkVDRQUkU
Flags: needinfo?(mlien)
Ugh, I'm having trouble digging down to find the problem on my machine. Given that the window.performance.timing numbers on Chrome are useless, I used a stopwatch and found Firefox to be *faster*. But it's tough to figure out when to stop timing, and the network is involved. I attempted to just look at what is running via both browsers' devtools.

- Chrome
  - 3 big chunks (totaling 2370ms)
    - "Parse HTML", total 847ms - mainly 2 script runs of edit:12
      - 257ms
      - 289ms
    - "Function Call" in i18n_kix_core.js:2044, duration 1000ms
    - "XHR Ready State Change" calling w.jGa at i18n_kix_core.js:628, 523ms
- Firefox (3512ms)
  - manually looking through waterfall:
    - i18_kix_core.js:1 135ms
    - edit:12 298ms (kix_core.js:858)
    - edit:12 662ms (kix_core.js:105)
    - readystatechange 1145ms (callback set from kix_core:2044)
    - readystatechange 838ms
    - setTimeout 436ms
  - non-inverted call tree:
    - 2090ms in kix_core.js:2030
    - 1007ms in w.jGa at kix_core.js:628
    - 776ms in w.dispatchEvent kix_core.js:105 calling w.rC:107

The two runs of edit:12 seem to match up. Maybe. And both call w.jGa. The second edit:12 script was 662ms on Firefox, 289ms on Chrome. So that might be worth looking into. w.jGa is Fx=1007ms Chrome=523ms, but it's harder to tell whether those really match up (Fx might be calling it multiple times or something).

If you can still see a larger difference, and it persists with the devtools open, it would be very interesting to compare the times of these script invocations using the devtools.

I copied the document to avoid accidentally messing it up, so I am using https://docs.google.com/document/d/14xhE4RtUOnqjXGR7JpOuEl2vMMWUPEu0iKrF3j1-iLI/edit
Btw, if you're profiling using Nightlies or Aurora, javascript.options.asyncstack makes all profiling useless. (See also bug 1280819)
(In reply to Olli Pettay [:smaug] (vacation Aug 25-28) from comment #42)
> Btw, if you're profiling using Nightlies or Aurora.
> javascript.options.asyncstack makes all profiling useless. (See also bug
> 1280819)

Thanks, I added that to https://developer.mozilla.org/en-US/docs/Mozilla/Benchmarking
Filed https://bugs.chromium.org/p/chromium/issues/detail?id=641089 for the Chrome window.performance.timing issue.
Jan, could you take a look at the scripts running in this test case? I am going to be looking into the GC for this bug and a few others, since they're all high. (I suppose it might be due to CCWs, which I could try terrence's patch for.)

http://people.mozilla.org/~sfink/data/tl-1269698.tar.xz is a tracelogger archive of this run (including startup). Check out the tracelogger repo, unpack this, run |python <tracelogger-dir>/website/server.py|, and navigate to http://localhost:8000/tracelogger.html. And then watch your browser melt; I'm probably going to write something to trim the traces down to make them more manageable.

What I'm seeing here is 33% of the time spent in baseline for the page load portion (37%, if you omit the internal tracelogger time). Quite a bit of time in ParserCompileScript and ParserCompileLazy too, for what it's worth.
Flags: needinfo?(jdemooij)
My latest try subjectively feels much faster.

Profile: https://cleopatra.io/#report=063ee1b8c2aba1999ee7ad0694f34718674a74fa

Summary:
  script: 52%
  wait:   28%
  CC:      4%

Quite a few bailouts (268), though I didn't look at the percentage of time spent in ion/baseline (the only way I know how to do that is to generate a tracelog.)

GC was only 2%, so it does not appear to be an issue here.

Unfortunately, my summarization script doesn't yet break down scripting engine (interpreter/ion/baseline/wasm) or parsing vs execution, but at a glance it still looks like there's a lot of parsing going on.
(In reply to Steve Fink [:sfink] [:s:] from comment #45)
> http://people.mozilla.org/~sfink/data/tl-1269698.tar.xz is a tracelogger
> archive of this run (including startup). Check out the tracelogger repo,
> unpack this, run |python <tracelogger-dir>/website/server.py|, navigate to
> http://localhost:8000/tracelogger.html. And then watch your browser melt;
> I'm probably going to write something to trim the traces down to make them
> more manageable.

Not melting anymore! I fixed some of the reasons why this log was giving the browser a hard time. The experience should be better now.
(In reply to Steve Fink [:sfink] [:s:] from comment #46)
> My latest try subjectively feels much faster.
>
> Profile:
> https://cleopatra.io/#report=063ee1b8c2aba1999ee7ad0694f34718674a74fa
>
> Summary:
>
> script: 52%
> wait: 28%
> CC: 4%
>
> Quite a few bailouts (268), though I didn't look at percentage of time spent
> in ion/baseline (the only way I know how to do that is to generate a
> tracelog.)

I asked the same question for another GDoc bug on IRC. At that time the conclusion was that it can be complicated to measure how much time is "spent" on bailouts, although we can get the additional re-compilation time from Tracelogger. I think the trickiest part is whether the "cost" of going from a faster path (in Ion) to a slower path (in Baseline) should be counted, because it is somewhat like an opportunity cost (had Ion guessed correctly at the beginning). For another bug I measured 40 ~ 60 bailouts even with just an "about:blank" page. Based on that, I thought maybe we can optimise our self-hosted and other JS-implemented features to eliminate such overhead. However, I don't know whether it's doable, or whether it's worth reducing bailouts to as few as possible, partly because I don't know how to measure the impact correctly.

> GC was only 2%, so it does not appear to be an issue here.
>
> Unfortunately, my summarization script doesn't yet break down scripting
> engine (interpreter/ion/baseline/wasm) or parsing vs execution, but at a
> glance it still looks like there's a lot of parsing going on.
Whiteboard: [platform-rel-Google][platform-rel-GoogleDocs] → [platform-rel-Google][platform-rel-GoogleSuite][platform-rel-GoogleDocs]
Summary: [Perf][google docs] 88.01%(7750 ms) slower than Chrome when opening 1 page UTF8 content → [perf][google suite][google docs] 88.01%(7750 ms) slower than Chrome when opening 1 page UTF8 content
plat-rel tracked at the meta level
platform-rel: + → -
Whiteboard: [platform-rel-Google][platform-rel-GoogleSuite][platform-rel-GoogleDocs] → [qf:investigate][platform-rel-Google][platform-rel-GoogleSuite][platform-rel-GoogleDocs]
Flags: needinfo?(jdemooij)
Keywords: perf
Nobody has looked into these bugs for a while; moving to P2 and waiting for [qf] re-evaluation before moving back to P1.
Priority: P1 → P2
Whiteboard: [qf:investigate][platform-rel-Google][platform-rel-GoogleSuite][platform-rel-GoogleDocs] → [qf][platform-rel-Google][platform-rel-GoogleSuite][platform-rel-GoogleDocs]
Marking this QF:incomplete because there doesn't seem to be a consensus on a real problem here, even two years ago. It's really questionable whether this exists, or exists in the same way today.
Whiteboard: [qf][platform-rel-Google][platform-rel-GoogleSuite][platform-rel-GoogleDocs] → [qf:incomplete][platform-rel-Google][platform-rel-GoogleSuite][platform-rel-GoogleDocs]
QA Whiteboard: qa-not-actionable

This one isn't very actionable and Speedindex numbers for Google Docs look quite reasonable: https://i.ibb.co/k3yYDK0/image.png
And obviously the site has been updated many times since this was filed.

If there are cases which are still slow, better to open new bugs with steps-to-reproduce.

Status: NEW → RESOLVED
Closed: 3 years ago
Resolution: --- → INACTIVE
Performance Impact: --- → ?
Whiteboard: [qf:incomplete][platform-rel-Google][platform-rel-GoogleSuite][platform-rel-GoogleDocs] → [platform-rel-Google][platform-rel-GoogleSuite][platform-rel-GoogleDocs]