Closed
Bug 958440
Opened 11 years ago
Closed 7 years ago
[l10n] use binary/compressed string instead of JSON format for inserting default locale strings into HTML
Categories
(Firefox OS Graveyard :: Gaia, defect)
Tracking
(blocking-b2g:-)
RESOLVED
WONTFIX
blocking-b2g: -
People
(Reporter: jj.evelyn, Unassigned)
References
Details
Attachments
(2 files)
Since the JSON format wastes memory, we should use a more compressed format to initialize the default locale strings.
Reporter
Updated•11 years ago
Whiteboard: [tarako]
Comment 1•11 years ago
I don’t understand what you mean with a “binary” format for l10n data, which is pure text. What format are you thinking of?
The JSON format was designed to be as lightweight as possible; it’s only slightly less compact than the *.properties format (just a few additional quote and underscore characters).
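For a rough sense of the size difference being discussed (illustrative only; the exact schema emitted by the build scripts may differ), here is the same entity in both forms:

// *.properties source line:
//   settingsTitle = Settings
// Roughly equivalent entry in the concatenated JSON:
var locales = { "settingsTitle": "Settings" };
// i.e. the JSON form only adds quotes, a colon and the enclosing braces
// around each entity.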
Updated•11 years ago
blocking-b2g: --- → 1.3?
Comment 2•11 years ago
(In reply to Fabien Cazenave [:kaze] from comment #1)
> I don’t understand what you mean with a “binary” format for l10n data, which
> is pure text. What format are you thinking of?
More like a "compressed" format than a "binary" format. The JSON representation is nice and readable, but the structured data takes a lot of memory in the JavaScript world (both BEFORE and AFTER JSON.parse()) and a lot of space on disk too.
A nice, maybe drop-in, solution would be http://msgpack.org/, but I don't know whether it would deliver actual performance gains. We should consider developing a data structure specifically for this, e.g. what I did in the JSZhuyin IME [1] and/or our spell-check libraries.
[1] https://github.com/timdream/jszhuyin/blob/master/lib/jszhuyin_data_pack.js
Comment 3•11 years ago
If you are looking for memory-efficient in-memory key-value storage, maybe storage.js could work here too?
https://github.com/timdream/jszhuyin/blob/master/lib/storage.js
This will certainly be smaller than storing values in a JS object dictionary. In fact, the Chinese IME only manages to run on the poor Unagi because I moved away from JS objects :).
Comment 4•11 years ago
(In reply to Tim Guan-tin Chien [:timdream] (MoCo-TPE) (please ni?) from comment #3)
> https://github.com/timdream/jszhuyin/blob/master/lib/storage.js
On second thought, storage.js is probably not good for this use case.
It was designed to work with keys of limited length and a lot of repetition at the beginning, e.g.,
abc
ab
abd
abdx
Our l10n IDs don't look like this at all, and the data structure would waste a lot of space on headers.
Comment 5•11 years ago
After looking further at webapp-optimise.js, I realized we can only replace the locales-obj/*.json files (GAIA_CONCAT_LOCALES) with typed-array blobs; we cannot replace the <script type="application/l10n"> blocks in HTML (the GAIA_INLINE_LOCALES feature).
The way we feed data into and read it from the |gL10nData| object needs a major overhaul too.
I am not sure if we really need GAIA_INLINE_LOCALES if we have GAIA_CONCAT_LOCALES, nor am I sure if we could optimise GAIA_CONCAT_LOCALES further into a typed array.
Reporter
Comment 6•11 years ago
GAIA_CONCAT_LOCALES was first introduced in bug 853933.
Comment 7•11 years ago
(In reply to Tim Guan-tin Chien [:timdream] (MoCo-TPE) (please ni?) from comment #5)
> I am not sure if we really need GAIA_INLINE_LOCALES if we have
> GAIA_CONCAT_LOCALES, nor am I sure if we could optimise
> GAIA_CONCAT_LOCALES further into a typed array.
s/GAIA_CONCAT_LOCALES/GAIA_INLINE_LOCALES/ the second one :-/
Reporter
Updated•11 years ago
Assignee: nobody → ehung
Comment 8•11 years ago
Please be aware that l10n.js will soon be retired in bug 914414 and replaced by l20n.js, so we should keep that part of the code change to a minimum.
Comment 9•11 years ago
(In reply to Tim Guan-tin Chien [:timdream] (MoCo-TPE) (please ni?) from comment #8)
> Please be aware that l10n.js will soon be retired in bug 914414 and replaced
> by l20n.js, so we should keep that part of the code change to a minimum.
Correction: bug 914414 is only about refactoring l10n.js; it's not about introducing l20n.js or the l20n syntax into Gaia.
I synced with Stas last Friday and we are confident the two bugs can proceed in parallel without generating too many conflicts.
Reporter
Comment 10•11 years ago
To reduce memory usage, we create an L10nDataStore that converts the original JSON data into a compressed typed array.
I also removed all the embedded l10n data (in HTML) in a separate commit (0169867).
Attachment #8362573 - Flags: review?(timdream)
Attachment #8362573 - Flags: review?(21)
Attachment #8362573 - Flags: feedback?(stas)
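For readers without access to the pull request, here is a minimal sketch of the general idea described in comment 10 (hypothetical names only; the actual L10nDataStore in the patch is more involved): pack all id/value pairs into a single typed array plus a small offset index, and decode individual entries on demand instead of keeping one JS object property per string.

'use strict';

// Hypothetical sketch, not the patch. Encode { id: value } pairs into a
// single Uint8Array plus an offset table.
function encodeStrings(dict) {
  var index = {};                       // id -> [offset, byteLength]
  var chunks = [];
  var offset = 0;
  Object.keys(dict).forEach(function(id) {
    // Store one char per UTF-8 byte (unescape/encodeURIComponent trick).
    var utf8 = unescape(encodeURIComponent(dict[id]));
    index[id] = [offset, utf8.length];
    chunks.push(utf8);
    offset += utf8.length;
  });
  var all = chunks.join('');
  var blob = new Uint8Array(all.length);
  for (var i = 0; i < all.length; i++) {
    blob[i] = all.charCodeAt(i);
  }
  return { index: index, blob: blob };
}

// Runtime lookup: decode only the requested entry, never the whole blob.
function getString(store, id) {
  var entry = store.index[id];
  if (!entry) {
    return undefined;
  }
  var bytes = store.blob.subarray(entry[0], entry[0] + entry[1]);
  var raw = String.fromCharCode.apply(null, bytes);
  return decodeURIComponent(escape(raw));
}

For example, getString(encodeStrings({ settingsTitle: 'Settings' }), 'settingsTitle') returns 'Settings'. Note that this sketch still keeps the ids in a plain JS object; a fuller implementation could move those into the typed array as well.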
Comment 11•11 years ago
Guess Yuren and I are curious, too.
I'd not bet money on l10n indices being ascii, fwiw.
Also, should we have the encoder in l10n.js? I'm curious whether we ever need it at Gaia runtime. Sounds more like /build/ stuff? But I'm guessing; I'm far from grokking the code yet.
Comment 12•11 years ago
Can we have an idea of the memory saved with this patch?
Comment 13•11 years ago
Do you have numbers on how removing inline locale data (inline HTML json) impacts app startup time with non-default locale?
Comment 14•11 years ago
Evelyn, the integration tests for the build script failed.
https://travis-ci.org/mozilla-b2g/gaia/jobs/17284506
Comment 15•11 years ago
(In reply to Axel Hecht [:Pike] from comment #11)
> Guess Yuren and I are curious, too.
>
> I'd not bet money on l10n indices being ascii, fwiw.
l10n IDs in properties will almost always be ASCII -- there is no reason for them not to be.
If we happen to come across a non-ASCII character, we could always encode it in UTF-8 before storing it in the Uint8Array. Using a Uint8Array gives us the same space efficiency as UTF-8 here.
> Also, should we have the encoder in l10n.js? I'm curious if we need that at
> gaia runtime ever? Sounds more like /build/ stuff? But I'm guessing, I'm far
> from groking the code yet.
Yeah, you're probably right. My original intent was to have the encoder and decoder share the same API. This could be done in a follow-up.
(In reply to Vivien Nicolas (:vingtetun) (:21) from comment #12)
> Can we have an idea of the memory saved with this patch?
(In reply to Zbigniew Braniecki [:gandalf] from comment #13)
> Do you have numbers on how removing inline locale data (inline HTML json)
> impacts app startup time with non-default locale?
Evelyn told me offline that the performance gain is small since there isn't that much l10n data per app, but I believe the patch here unlocks more improvements later on. I am asking her to post measurement results here.
Comment 16•11 years ago
(In reply to Tim Guan-tin Chien [:timdream] (MoCo-TPE) (please ni?) from comment #15)
> (In reply to Axel Hecht [:Pike] from comment #11)
> If we happen to come across a non-ASCII character, we could always encode
> it in UTF-8 before storing it in the Uint8Array. Using a Uint8Array gives us
> the same space efficiency as UTF-8 here.
FYI this is as simple as unescape(encodeURIComponent(string)).
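Spelled out, the round trip looks roughly like this (a generic sketch, not code from the patch):

// Encode a possibly non-ASCII value as UTF-8 bytes in a Uint8Array...
var utf8 = unescape(encodeURIComponent('café'));   // 5 chars, one per UTF-8 byte
var bytes = new Uint8Array(utf8.length);
for (var i = 0; i < utf8.length; i++) {
  bytes[i] = utf8.charCodeAt(i);
}
// ...and decode it back when the string is looked up.
var decoded = decodeURIComponent(escape(String.fromCharCode.apply(null, bytes)));
// decoded === 'café'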
Comment 17•11 years ago
> (In reply to Vivien Nicolas (:vingtetun) (:21) from comment #12)
> > Can we have an idea of the memory saved with this patch?
>
> (In reply to Zbigniew Braniecki [:gandalf] from comment #13)
> > Do you have numbers on how removing inline locale data (inline HTML json)
> > impacts app startup time with non-default locale?
>
> Evelyn told me offline the performance gain is little since there isn't that
> much of l10n data per app, but I believe the patch here unlock us for making
> more improvements later on. I am asking her to post a measurement result
> here.
Thanks, with numbers it will be easier to make a decision :)
Comment 18•11 years ago
(In reply to Tim Guan-tin Chien [:timdream] (MoCo-TPE) (please ni?) from comment #5)
> I am not sure if we really need GAIA_INLINE_LOCALES if we have
> GAIA_CONCAT_LOCALES, nor am I sure if we could optimise
> GAIA_CONCAT_LOCALES further into a typed array.
They serve different purposes:
- GAIA_CONCAT_LOCALES groups multiple properties files into one (JSON) file to avoid multiple file IO requests and the need to parse properties files,
- GAIA_INLINE_LOCALES takes a small subset of that JSON (with required dependencies for string interpolation) and inserts it into the HTML files to provide a fast way to translate the visible DOM into non-default locales without having to wait for the proper JSON to be XHR'ed. This was done to avoid FOUCs.
I'd happily remove GAIA_INLINE_LOCALES if removing it doesn't regress the launch experience for non-default locales. In fact, I just tested Evelyn's patch on a Keon, and I didn't notice any FOUCs at all, so maybe other parts of Gaia are fast enough to get rid of inlining at last.
I'd suggest tackling this in a separate bug. Evelyn, could you extract commit 0169867 into a new branch and a new patch? I feel like these two things should be addressed separately.
(In reply to Vivien Nicolas (:vingtetun) (:21) from comment #12)
> Can we have an idea of the memory saved with this patch?
I did a quick test using the Contacts app, which has been giving us a lot of trouble in bug 914414. Evelyn's patch seems to save around 230 KB of memory pre-GC compared to today's master; it uses 120 KB more after GC.
Evelyn, could we use https://developer.mozilla.org/en-US/docs/Web/API/TextEncoder instead of String.prototype.getBytes in the patch?
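For reference, the TextEncoder/TextDecoder equivalent of the manual conversion would look roughly like this (assuming the API is available wherever the encoder ends up running):

var encoder = new TextEncoder();            // UTF-8 by default
var decoder = new TextDecoder('utf-8');
var bytes = encoder.encode('café');         // Uint8Array of UTF-8 bytes
var text = decoder.decode(bytes);           // back to a regular JS string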
Reporter
Comment 19•11 years ago
Quick update: sorry I haven't caught up here yet because I'm trying to fix the Travis error. I will address your feedback soon. :)
Comment 20•11 years ago
Comment on attachment 8362573 [details]
point to https://github.com/mozilla-b2g/gaia/pull/15534
I've already left many review comments on GitHub and I think we are on the right track :) Let's work toward getting this landed!
Attachment #8362573 - Flags: review?(timdream) → feedback+
Comment 21•11 years ago
I don't think this is a good idea to remove GAIA_INLINE_LOCALE anyway :)
Comment 22•11 years ago
(In reply to Staś Małolepszy :stas from comment #18)
> (In reply to Tim Guan-tin Chien [:timdream] (MoCo-TPE) (please ni?) from
> comment #5)
> > I am not sure if we rally need GAIA_INLINE_LOCALES if we have
> > GAIA_CONCAT_LOCALES, nor I am not sure if we could optimise
> > GAIA_CONCAT_LOCALES further into typed array.
>
> They serve different purposes:
>
> - GAIA_CONCAT_LOCALES groups multiple properties files into one (JSON)
> file to avoid multiple file IO requests and the need to parse properties
> files,
> - GAIA_INLINE_LOCALES takes a small subset of that JSON (with required
> dependencies for string interpolation) and inserts it into the HTML files to
> provide a fast way to translate the visible DOM into non-default locales
> without having to wait for the proper JSON to be XHR'ed. This was done to
> avoid FOUCs.
>
> I'd happily remove GAIA_INLINE_LOCALES if they don't regress the launch
> experience for non-default locales. In fact, I just tested Evelyn's patch
> on a Keon, and I didn't notice any FUOCs at all, so maybe other parts of
> Gaia are fast enough to get rid of inlining at last.
>
> I'd suggest to tackle this in a separate bug. Evelyn, could you abstract
> the commit 0169867 into a new branch and a new patch? I feel like these two
> things should be addressed separately.
>
> (In reply to Vivien Nicolas (:vingtetun) (:21) from comment #12)
> > Can we have an idea of the memory saved with this patch?
>
> I did a quick test using the Contacts app which have been giving us a lot of
> trouble in bug 914414. Evelyn's patch seems to save around 230 KB of memory
> pre-GC as compared to today's master; it uses 120 KB more after GC.
>
That's a surprising result. Could it be that we keep both blobData and jsonData in memory? (I have not looked at the patch yet.)
It would be good to have some results for other apps too. There is now a tool to check memory with test-perf. It is pretty new, so I'm unsure how reliable it is, and I doubt it does two measurements like pre-GC/after-GC, but nothing prevents us from fixing that ;)
Comment 23•11 years ago
(In reply to Vivien Nicolas (:vingtetun) (:21) from comment #21)
> I don't think this is a good idea to remove GAIA_INLINE_LOCALE anyway :)
And the reason being?
(In reply to Vivien Nicolas (:vingtetun) (:21) from comment #22)
> (In reply to Staś Małolepszy :stas from comment #18)
> > (In reply to Vivien Nicolas (:vingtetun) (:21) from comment #12)
> > > Can we have an idea of the memory saved with this patch?
> >
> > I did a quick test using the Contacts app which have been giving us a lot of
> > trouble in bug 914414. Evelyn's patch seems to save around 230 KB of memory
> > pre-GC as compared to today's master; it uses 120 KB more after GC.
> >
>
> That's a surprising result. Could it be that we keep both blobData and
> jsonData in memory ? (I have not looked at the patch yet).
>
jsonData is only constructed at build time -- the two will never co-exist.
The l10nData.get() function reads the string directly from blobData at runtime, instead of decompressing blobData back into the original JSON structure (which would defeat the purpose).
(Axel is right -- we should have considered separating the encoder and decoder to avoid this confusion.)
I am puzzled by the fact that this patch saves more memory pre-GC but takes more after GC. I would expect both figures to go down (not by much, but definitely downwards).
Maybe the fact that we save English strings in Uint16 contributes to that; I wouldn't be surprised if there were some dark magic inside the JS engine that stores ASCII strings in 8 bits -- such a feature would have been very useful back in the days when people stored bytes in JS strings :)
> It would be good to have some results for other apps too. There is normally
> a tool to check the memory now with test-perf. It is pretty new, so I'm
> unsure about how reliable it is, and I doubt that it is doing 2 measurements
> like pre-gc/after-gc but nothing prevent to fix that ;)
Comment 24•11 years ago
(In reply to Tim Guan-tin Chien [:timdream] (MoCo-TPE) (please ni?) from comment #23)
> (In reply to Vivien Nicolas (:vingtetun) (:21) from comment #21)
> > I don't think this is a good idea to remove GAIA_INLINE_LOCALE anyway :)
>
> And the reason being?
Hm. Maybe I mixed up the default locale, which is inlined in the document, and GAIA_INLINE_LOCALE, which inlines a JSON subset.
One thing to check with the removal of GAIA_INLINE_LOCALE is that it does not slow down app startup by forcing an XHR at startup.
> I am puzzled by the fact this patch saves more memory pre-GC but takes more
> after GC. I would expect both figures to go down (not so much, but
> definitely downwards).
>
> Maybe the fact that we save English strings in Uint16 contributes to that; I
> wouldn't be surprised if there are some dark magic inside JS engine that
> would save ascii strings in 8-bits -- such feature will be very useful back
> in the days when people storing bytes in JS strings :)
>
Could be. nbp, do you know?
Flags: needinfo?(nicolas.b.pierron)
Updated•11 years ago
blocking-b2g: 1.3? → 1.3T?
Comment 25•11 years ago
(In reply to Vivien Nicolas (:vingtetun) (:21) from comment #24)
> One thing with the removal of GAIA_INLINE_LOCALE is to check if it does not
> slow down app startup by enforcing a xhr at startup.
Yeah, I'd be very interested in confirming my numbers that inline_locale does not impact performance on non-default locales. Can we fork it into a separate bug?
Comment 26•11 years ago
(In reply to Tim Guan-tin Chien [:timdream] (MoCo-TPE) (please ni?) from comment #23)
> I am puzzled by the fact this patch saves more memory pre-GC but takes more
> after GC. I would expect both figures to go down (not so much, but
> definitely downwards).
pre-GC does not mean much.
after-GC is the important one, as long as you run the GC at the same idle point in both cases.
We do not yet have any good way to plot the memory usage as-is, as it does not make much sense either in a GC'ed language.
> Maybe the fact that we save English strings in Uint16 contributes to that; I
> wouldn't be surprised if there are some dark magic inside JS engine that
> would save ascii strings in 8-bits -- such feature will be very useful back
> in the days when people storing bytes in JS strings :)
Not yet. But we still have this issue for base64 strings such as the home screen image (unless it got fixed?)
Flags: needinfo?(nicolas.b.pierron)
Comment 27•11 years ago
(In reply to Nicolas B. Pierron [:nbp] from comment #26)
> (In reply to Tim Guan-tin Chien [:timdream] (MoCo-TPE) (please ni?) from
> comment #23)
> > I am puzzled by the fact this patch saves more memory pre-GC but takes more
> > after GC. I would expect both figures to go down (not so much, but
> > definitely downwards).
>
> pre-GC does not mean much.
> after-GC is the important one, as long as you run the GC at the same idle
> point in both cases.
>
pre-GC matters too, in the sense that it is the peak memory that can be consumed by the app and may result in other apps being killed. But for 280 KB it is OK, I guess.
Reporter
Comment 28•11 years ago
(In reply to Staś Małolepszy :stas from comment #18)
> ...
> I'd happily remove GAIA_INLINE_LOCALES if they don't regress the launch
> experience for non-default locales. In fact, I just tested Evelyn's patch
> on a Keon, and I didn't notice any FUOCs at all, so maybe other parts of
> Gaia are fast enough to get rid of inlining at last.
>
I'm generating a report and will paste it here for evaluation.
> I'd suggest to tackle this in a separate bug. Evelyn, could you abstract
> the commit 0169867 into a new branch and a new patch? I feel like these two
> things should be addressed separately.
>
I feel it's not appropriate to move this to a separate bug, because:
1. I still have to turn off GAIA_INLINE_LOCALES in the build script, otherwise the inline JSON will break my logic for accessing the blob data. As Tim said in comment 23, jsonData and blobData should never co-exist. It also breaks l10n_test.js in the Gallery app (yes, it's in Gallery, see bug 841422), since there are a few cases testing the inline behaviour. It doesn't seem like a good idea to remove test cases but keep the code under test. (I really don't want to add more tricks to the l10n framework; it's already fragile.)
> (In reply to Vivien Nicolas (:vingtetun) (:21) from comment #12)
> > Can we have an idea of the memory saved with this patch?
>
> Evelyn, could we use
> https://developer.mozilla.org/en-US/docs/Web/API/TextEncoder instead of
> String.prototype.getBytes in the patch?
Can you explain the benefit of using TextEncoder? (BTW, I changed `String.prototype.getBytes` to `L10nDataStore.prototype._stringToArray` per Tim's comment on GitHub.)
Reporter
Comment 29•11 years ago
Note on the *magic* I found in the whole l10n framework:
l10n.js is used both at build time and at run time. It uses XMLHttpRequest to get data from files, but the XMLHttpRequest object is actually different at build time, at run time, and even in test cases. So be careful if you are modifying event hooks or the returned data type of XMLHttpRequest. The XMLHttpRequest objects in the build script and in the test cases are faked, so you should make sure they all support the events and data types you need. (Because of this, I strongly feel we should separate the encode/decode scripts, even though some code lines will be duplicated; bug 963436 filed.)
Comment 30•11 years ago
(In reply to Evelyn Hung [:evelyn] from comment #29)
> Note on the *magic* I found in the whole l10n framework:
>
> l10n.js will be used both in build time and in run time. It use
> XMLHttpRequest to get data from files but the XMLHttpRequest object is
> actually different in build time, in run time, and even in test case. So be
> careful if you are modifying event hooks or returned data type of
> XMLHttpRequest. The XMLHttpRequest object in build script and in test case
> are faked, you should make sure they all support events and data types you
> want. (Because of this, I strongly feel we should separate encode/decode
> scripts, some code lines will be duplicated though, bug 963436 filed)
Yeah, we hit this several times during the refactor. We'll probably want to separate the BTO l10n script from the runtime one in the future. That should help as well.
Comment 31•11 years ago
(In reply to Vivien Nicolas (:vingtetun) (:21) from comment #27)
> (In reply to Nicolas B. Pierron [:nbp] from comment #26)
> > (In reply to Tim Guan-tin Chien [:timdream] (MoCo-TPE) (please ni?) from
> > comment #23)
> > > I am puzzled by the fact this patch saves more memory pre-GC but takes more
> > > after GC. I would expect both figures to go down (not so much, but
> > > definitely downwards).
> >
> > pre-GC does not mean much.
> > after-GC is the important one, as long as you run the GC at the same idle
> > point in both cases.
> >
>
> pre-GC is important too in the sense that it is the peak of memory that can
> be consumed by the app and may result into other apps to be killed. But for
> 280k it is OK I guess.
This is not peak memory, as it does not return the maximum memory used, just the memory used at the time of the snapshot. So it cannot blindly be used for comparisons. Do we have any way to capture peak memory? I am not aware of any that does not assume virtual allocations are committed (such as valgrind).
Flags: needinfo?(n.nethercote)
Reporter
Comment 32•11 years ago
(In reply to Evelyn Hung [:evelyn] from comment #28)
> (In reply to Staś Małolepszy :stas from comment #18)
> > ...
> > I'd happily remove GAIA_INLINE_LOCALES if they don't regress the launch
> > experience for non-default locales. In fact, I just tested Evelyn's patch
> > on a Keon, and I didn't notice any FUOCs at all, so maybe other parts of
> > Gaia are fast enough to get rid of inlining at last.
> >
> I'm generating a report and will paste here for a evaluation.
Well, I misunderstood what the test-perf report is for. We don't need it to evaluate whether we can remove GAIA_INLINE_LOCALES; we just need to make sure FOUCs won't happen. Unfortunately, after more manual testing I see significant FOUCs on Contacts, Messages, and Dialer. So I will try to keep GAIA_INLINE_LOCALES and make it work with my patch... :(
That means we cache the inline data as JSON to avoid FOUCs, but access the rest of the strings from the blob.
To get rid of the inline stuff, we may need an extra 'localized' event or an event flag to let apps know whether it's the first localized event, so apps can update their DOM elements accordingly (show elements that are hidden by default, or re-query data).
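A rough sketch of what that could look like on the app side (purely illustrative; the 'localized' event follows the existing l10n.js behaviour, and the CSS class name is hypothetical):

var firstLocalized = false;
window.addEventListener('localized', function() {
  if (!firstLocalized) {
    firstLocalized = true;
    // First localization pass is done: reveal content that was hidden by
    // default to avoid a flash of untranslated content, or re-query data.
    document.body.classList.remove('pre-localized');
    return;
  }
  // Subsequent 'localized' events mean the user switched languages.
});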
Reporter
Comment 33•11 years ago
(In reply to Evelyn Hung [:evelyn] from comment #32)
> (In reply to Evelyn Hung [:evelyn] from comment #28)
> > (In reply to Staś Małolepszy :stas from comment #18)
> > > ...
> > > I'd happily remove GAIA_INLINE_LOCALES if they don't regress the launch
> > > experience for non-default locales. In fact, I just tested Evelyn's patch
> > > on a Keon, and I didn't notice any FUOCs at all, so maybe other parts of
> > > Gaia are fast enough to get rid of inlining at last.
> > >
> > I'm generating a report and will paste here for a evaluation.
>
> Well, I was misunderstanding what the test-perf report is for. We don't want
> it for evaluating if we can remove GAIA_INLINE_LOCALES
But we need a memory report to see whether my patch really decreases memory usage. To be honest, I don't think it uses less memory, since my generated data may be bigger than the original JSON file (especially in English, because of UTF-8 vs. UTF-16). Anyway, I'm still figuring out how to come up with a fair memory report. :X
Comment 34•11 years ago
(In reply to Evelyn Hung [:evelyn] from comment #32)
> (In reply to Evelyn Hung [:evelyn] from comment #28)
> > (In reply to Staś Małolepszy :stas from comment #18)
> > > ...
> > > I'd happily remove GAIA_INLINE_LOCALES if they don't regress the launch
> > > experience for non-default locales. In fact, I just tested Evelyn's patch
> > > on a Keon, and I didn't notice any FUOCs at all, so maybe other parts of
> > > Gaia are fast enough to get rid of inlining at last.
> > >
> > I'm generating a report and will paste here for a evaluation.
>
> Well, I was misunderstanding what the test-perf report is for. We don't want
> it for evaluating if we can remove GAIA_INLINE_LOCALES, we just need to make
> sure FOUCs won't happen. Unfortunately, I see FOUCs on Contacts, Message,
> and Dialer significantly after manually testing more. So I will try to keep
> GAIA_INLINE_LOCALES, and let it work with my patch... :(
> That means we cache inline data in a json to avoid FOUCs, but access the
> rest of strings from blob.
>
In fact we have thought of inlining everything in bug 923492. It would be an issue for Tarako, where there is only 100 MB of ROM. To reduce the extra cost we would need something other than the zip algorithm we use today, since it compresses on a per-file basis, which is not very smart in our case.
> To get rid of inline stuff, we may need an extra 'localized' event or an
> event flag to let apps know if it's the first localized event, so apps can
> update their DOM element according to. (show default hidden elements or
> re-query data)
Showing elements that are hidden by default just because we are waiting for localization seems like a hack to me.
Reporter
Comment 35•11 years ago
Comment on attachment 8362573 [details]
point to https://github.com/mozilla-b2g/gaia/pull/15534
1. Added the inline stuff back; we still need it.
2. Review comments addressed.
3. Travis error fixed (the marionette_js test failure doesn't seem to be my fault; I get the same error on master locally).
Attachment #8362573 - Flags: review?(timdream)
Comment 36•11 years ago
(In reply to Vivien Nicolas (:vingtetun) (:21) from comment #34)
> > To get rid of inline stuff, we may need an extra 'localized' event or an
> > event flag to let apps know if it's the first localized event, so apps can
> > update their DOM element according to. (show default hidden elements or
> > re-query data)
>
> Showing by default hidden element just because we wait for localization
> seems like a hack to me.
Well, on the other hand we do plan to introduce exactly that at the Gecko level, similarly to how we treat CSS now, where we only paint after the document is localized.
But doing the same at the shim-library level may cost us startup perf, in which case I don't think it'll fly.
Reporter
Comment 37•11 years ago
(In reply to Evelyn Hung [:evelyn] from comment #28)
> (In reply to Staś Małolepszy :stas from comment #18)
> > I'd suggest to tackle this in a separate bug. Evelyn, could you abstract
> > the commit 0169867 into a new branch and a new patch? I feel like these two
> > things should be addressed separately.
> >
>
> I feel it's not appropriate to move this to a separate bug, because:
> 1. I still have to turn off GAIA_INLINE_LOCALES in build script, otherwise
> those inline json will break my logic of accessing blob data. As Tim said in
> comment 23, jsonData and blobData should never co-exist. (...)
My adding-the-inline-data-back approach still does not make them co-exist. DOM translation happens synchronously, and I throw the embedded jsonData away once translation is done.
Comment 38•11 years ago
Comment on attachment 8362573 [details]
point to https://github.com/mozilla-b2g/gaia/pull/15534
Looks good.
Attachment #8362573 - Flags: review?(timdream) → review+
Comment 39•11 years ago
(In reply to Zbigniew Braniecki [:gandalf] from comment #36)
> (In reply to Vivien Nicolas (:vingtetun) (:21) from comment #34)
> > > To get rid of inline stuff, we may need an extra 'localized' event or an
> > > event flag to let apps know if it's the first localized event, so apps can
> > > update their DOM element according to. (show default hidden elements or
> > > re-query data)
> >
> > Showing by default hidden element just because we wait for localization
> > seems like a hack to me.
>
> Well, on the other hand we do plan to introduce exactly that on the Gecko
> level - similarly to how we treat CSS now, where we only paint after
> document is localized.
>
> But doing the same on a shim lib level may cost us startup perf in which
> case I don't think it'll fly.
I would not recommend adding anything to Gecko, as these features are specific to packaged apps and will not be standardized in the foreseeable future, i.e. they are hard to understand from the perspective of the Open Web accessed with URLs.
A lot of web pages nowadays come with their own loading screens. At the library level it's perfectly fine to implement APIs that give users an opportunity to explicitly implement a loading screen if they want to. Trying to solve it with INLINE or anything else sounds more like a hack to me than that does.
Comment 40•11 years ago
> This is not a peak memory as it does not return the maximum of memory used,
> but just the memory used at the time of the snapshot. So it cannot blindly
> be used for comparisons. Do we have any way to capture the peak memory?
A while back I worked on this in bug 842800, but I didn't finish it and it's now moribund.
More usefully, here's a script I often use. You pass it a pid. It measures the RSS of that process 10 times a second, and it also records the peak RSS. Use ctrl-C to stop it.
#! /usr/bin/python
from __future__ import print_function
import subprocess
import sys
import time

if len(sys.argv) != 2:
    print("usage:", sys.argv[0], "<pid>")
    sys.exit(1)

pid = sys.argv[1]
peak_rss = 0

try:
    while True:
        cmd = ["ps", "--no-headers", "-o", "rss", "--pid=" + pid]
        rss = int(subprocess.check_output(cmd).rstrip())
        print("{0:.2f}".format(rss / 1024.0))
        if rss > peak_rss:
            peak_rss = rss
        time.sleep(0.10)
except:
    print("\npeak: {0:.2f}".format(peak_rss / 1024.0))
Flags: needinfo?(n.nethercote)
Comment 41•11 years ago
And you can get peak RSS on Linux more directly from /proc/<pid>/status -- look for the "VmHWM" measurement (see http://man7.org/linux/man-pages/man5/proc.5.html for details).
Comment 42•11 years ago
(In reply to Tim Guan-tin Chien [:timdream] (MoCo-TPE) (please ni?) from comment #39)
> I would not recommend adding anything to Gecko as they are feature specific
> to packaged apps and it will not be standardized in feasible feature, i.e.
> makes it hard to understand from perspective of the Open Web accessed with
> URLs.
What we're working on with multilingual HTML documents targets the Open Web accessed with URLs. It just separates HTML from in-language content.
We need a similar thing for FxOS, so we may want to try to build it in a forward-compatible way.
Reporter
Comment 43•11 years ago
I ran |make test-perf| and got these apps' launch times and memory usage. (It's weird that some apps failed and threw errors. One full result file is attached.)
calendar
launch time: (master) 2526ms (958440) 1920.4ms
memory: uss pss rss vsize
- master: 16.6 19.8 32.2 84.4
- 958440: 16.9 19.1 30.7 76.9
camera
launch time: (master) 715.8ms (958440) 774.6ms
memory: uss pss rss vsize
- master: 17.1 20.4 32.8 86.7
- 958440: 19.9 22.6 32.9 86.7
clock
launch time: (master) 3938ms (958440) 3147.4ms
memory: uss pss rss vsize
- master: 19 22.1 34.2 82.7
- 958440: 21.3 23.9 34.3 107.5
communications/contacts
launch time: (master) 5056ms (958440) 4115.4ms
memory: uss pss rss vsize
- master: 20.4 24.1 35 91.2
- 958440: 22.1 25.4 35.1 82.8
communications/dialer
launch time: (master) 5799ms (958440) 4881.4ms
memory: uss pss rss vsize
- master: 17.2 21.9 31.9 92.4
- 958440: 18.2 21.5 31.5 81.6
communications/ftu
launch time: (master) 8957.4ms (958440) 7809.6ms
memory: uss pss rss vsize
- master: 21.9 27 37.3 89.4
- 958440: 22.2 26.9 36.4 82.1
Reporter
Comment 44•11 years ago
The test-perf report is a reference for looking at memory snapshots, and obviously it does not show any improvement. However, the patch might help memory usage together with bug 945152, so I will measure again after bug 945152 lands.
Comment 45•11 years ago
(In reply to Zbigniew Braniecki [:gandalf] from comment #36)
> (In reply to Vivien Nicolas (:vingtetun) (:21) from comment #34)
> > > To get rid of inline stuff, we may need an extra 'localized' event or an
> > > event flag to let apps know if it's the first localized event, so apps can
> > > update their DOM element according to. (show default hidden elements or
> > > re-query data)
> >
> > Showing by default hidden element just because we wait for localization
> > seems like a hack to me.
>
> Well, on the other hand we do plan to introduce exactly that on the Gecko
> level - similarly to how we treat CSS now, where we only paint after
> document is localized.
I kind of suspect that at the Gecko level we will start to show something on the first reflow, rather than waiting until everything is loaded, and that we will build the page with locales directly, instead of building the DOM and then adding the locales. Or did I misunderstand something, and there is a bug in the pipe that says something different?
Comment 46•11 years ago
Comment on attachment 8362573 [details]
point to https://github.com/mozilla-b2g/gaia/pull/15534
Launch time seems weird in the attached values, but also slightly better, while USS seems to have regressed a little bit.
Can you re-request review once you know what is happening with the USS?
Attachment #8362573 - Flags: review?(21)
Comment 47•11 years ago
(In reply to Vivien Nicolas (:vingtetun) (:21) from comment #45)
> I kind of suspect that at the Gecko level we will start to show something on
> the first reflow, not until everything is loaded, and then we will build the
> page with locales directly, instead of building the dom and then add the
> locales. Or did I misunderstood something and there is a bug in the pipe
> that says something different ?
In an ideal world, we would wait for localization before firstPaint. At least that was the original idea that the layout/DOM/l10n teams decided to try.
Pike - am I right here?
Flags: needinfo?(l10n)
Comment 48•11 years ago
I think in a perfect world we'll have the perfect balance between DOM, styling, and l10n shadow content during incremental reflow.
And now I get the Oscar for best fantasy short film.
Flags: needinfo?(l10n)
Comment 49•11 years ago
(In reply to Zbigniew Braniecki [:gandalf] from comment #42)
> (In reply to Tim Guan-tin Chien [:timdream] (MoCo-TPE) (please ni?) from
> comment #39)
>
> > I would not recommend adding anything to Gecko as they are feature specific
> > to packaged apps and it will not be standardized in feasible feature, i.e.
> > makes it hard to understand from perspective of the Open Web accessed with
> > URLs.
>
> What we're working on with multilingual HTML documents is targeting Open Web
> accessed with URLs. It just separates HTML from in-language content.
>
> We need similar thing for FxOS, so we may want to try to build it in a
> forward compatible way.
There is already content negotiation in HTTP. You must have something better than that in mind to say you have a better solution for the Open Web along with the packaged app use cases. Let's move the discussion somewhere else (is there a bug? a thread?)
https://en.wikipedia.org/wiki/Content_negotiation
Comment 50•11 years ago
(In reply to Tim Guan-tin Chien [:timdream] (MoCo-TPE) (please ni?) from comment #49)
> There is already content negotiation in HTTP. You must have something better
> than that in mind to say you have a better solution for the Open Web along
> with the packaged app use cases. Let's move the discussion somewhere else
> (is there a bug? a thread?)
We're talking about the client side.
No, we don't have bugs filed yet. Once we start working on a draft proposal I'll file bugs.
Comment 51•11 years ago
triage: 1.3T+ so we can have auto-correction
blocking-b2g: 1.3T? → 1.3T+
Whiteboard: [tarako]
Reporter
Comment 52•11 years ago
(In reply to Joe Cheng [:jcheng] from comment #51)
> triage: 1.3T+ so we can have auto-correction
To correct Joe's comment: the patch in this bug needs to work with the patch in bug 945152, so we may have a chance to decrease memory. Also, the auto-correction feature may work more smoothly on top of bug 945152's patch. We are evaluating how much benefit we can get, so we can decide whether to take all of them.
Reporter
Comment 53•11 years ago
I tested the patch here together with the patches in bug 945152 and bug 957497, but I can't tell whether anything is better... I need someone to further evaluate the result. I will ping :shianyow to see if any tool can produce a fair report.
Updated•11 years ago
Flags: needinfo?(swu)
Comment 54•11 years ago
The earlier patch supports mapped array buffers; however, it has a problem handling the case where the mapped array buffer is cloned in Gecko's XHR worker, which still consumes heap memory.
The problem has been fixed in a new patch in bug 945152, which is under review.
Flags: needinfo?(swu)
Comment 55•11 years ago
Given that the memory benefit is small and we risk redoing/conflicting with the patch in bug 914414, I am making this bug 1.3? for triage to reconsider.
blocking-b2g: 1.3T+ → 1.3T?
Comment 56•11 years ago
(In reply to Tim Guan-tin Chien [:timdream] (MoCo-TPE) (please ni?) from comment #55)
> Given that the memory benefit is small and we risk redoing/conflicting with
> the patch in bug 914414, I am making this bug 1.3? for triage to reconsider.
Agreed, and hope bug 945152 helps here when it lands.
blocking-b2g: 1.3T? → -
Comment 57•11 years ago
I am removing the assignee since Evelyn is no longer actively working on this. Also setting the dependency just to ensure things get done in the right order.
Assignee: ehung → nobody
Reporter
Updated•11 years ago
Attachment #8362573 - Flags: feedback?(stas)
Comment 58•7 years ago
Firefox OS is not being worked on
Status: NEW → RESOLVED
Closed: 7 years ago
Resolution: --- → WONTFIX