Closed Bug 1333980 Opened 8 years ago Closed 7 years ago

Land L10nRegistry

Categories

(Core :: Internationalization, defect)

Type: defect
Priority: Not set
Severity: normal

Tracking


RESOLVED FIXED
mozilla57
Tracking Status
firefox57 --- fixed

People

(Reporter: zbraniecki, Assigned: zbraniecki)

References

(Blocks 3 open bugs)

Details

Attachments

(1 file)

While working on the new localization framework for Gecko we were asked not to use ChromeRegistry. In bug 1280671 we designed a new type of registry specifically for localization resources - L10nRegistry. Since then it went through several iterations, and in its current form it lives here: https://github.com/zbraniecki/L10nRegistry

I'd like to bind it with the new Intl::LocaleService (bug 1332207) and start using it for L20n in Fennec.
And we'll need bug 1331092 to make it async.
Depends on: 1331092
Depends on: 1332207
For the record - the relationship between the four new APIs - OSPreferences, LocaleService, L10nRegistry and mozIntl - is as follows:

- OSPreferences is an API for retrieving OS preferences.
- LocaleService is an API responsible for negotiating the locales used by the application.
- L10nRegistry is an API responsible for managing l10n resources.
- mozIntl is an API that wraps the JS Intl API + ICU for front-end chrome code to use, with the help of OSPreferences and LocaleService.

The important notes:

- That's a change from how the code is structured currently - in particular, ChromeRegistry manages app languages, not LocaleService. We want to increase the separation of concerns: L10nRegistry will not decide on languages, it will only maintain locale resources - cache them, retrieve them, and maybe later install/remove them.
- LocaleService may, in time, use the notion of 'contexts' to develop separate language negotiation strategies for different abstract contexts. An example is Firefox in 'de' with devtools in 'en-US'.
- Everywhere in Gecko and Firefox, we should end up retrieving only one list of locales - LocaleService::GetAppLocales - and expect that the first of them is the one the application is currently translated to.
- Pseudo-locales will be implemented within this system and will work transparently from the perspective of callsites.
- All four will rely on ICU. We do not expect to have non-ICU versions.
- In the end, all four will be used by the new localization framework, and until we migrate to it completely (end of 2017) we will need to be careful to make sure that old callsites remain compatible with the new ones.
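To make the division of responsibilities a bit more concrete, here is a rough sketch of how these pieces are intended to interact. It is illustrative only: the call names (getAppLocales, generateContexts) follow the naming used in this bug, but the final JS bindings and signatures may differ.

```
// Illustrative only - not final Gecko signatures.

// LocaleService owns language negotiation and hands out the app locales
// (referred to as LocaleService::GetAppLocales above).
const appLocales = LocaleService.getAppLocales(); // e.g. ["pl", "en-US"]

// L10nRegistry only manages resources: given locales and resource ids it
// produces MessageContexts, falling back across registered sources.
const contexts = L10nRegistry.generateContexts(appLocales, [
  "/browser/menu.ftl",
  "/platform/toolkit.ftl",
]);

// mozIntl (backed by ICU, informed by OSPreferences) is what front-end
// chrome code uses for date/number formatting in those locales.
```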
Can you explain more about why this needs bug 1331092? Does it effectively return a stream of values that you'd like to iterate over? If so, actually returning a ReadableStream might be better, which would make bug 1128959 the better dependency. To be clear, consuming a streaming async source is much nicer with async iterators, but they should be a convenience on the client side, not a blocker for introducing the producing object.
Yeah, so, L10nRegistry has a fairly sophisticated approach to language resource fallbacking. It takes multiple sources (think "PlatformResources", "BrowserResources", "Addon1Resources") and multiple files (think "file1", "file2", "file3") and creates permutations of all combinations of those sources and files:

[
  [
    {file: "file1", source: "PlatformResources"},
    {file: "file2", source: "PlatformResources"},
    {file: "file3", source: "PlatformResources"},
  ],
  [
    {file: "file1", source: "PlatformResources"},
    {file: "file2", source: "PlatformResources"},
    {file: "file3", source: "BrowserResources"},
  ],
  [
    {file: "file1", source: "PlatformResources"},
    {file: "file2", source: "BrowserResources"},
    {file: "file3", source: "PlatformResources"},
  ],
  [
    {file: "file1", source: "PlatformResources"},
    {file: "file2", source: "BrowserResources"},
    {file: "file3", source: "BrowserResources"},
  ],
  ...
]

That of course generates a pretty sizable array once you get to more sources and more files to load. But the good news is that we don't really need all of those combinations - we just need the first one that fully resolves, and then, maybe, later, the second one that resolves. So we use a generator to yield bundles, which means that the caller of the API gets just the first bundle, can attempt to use it, and if that doesn't work, takes the next one from the generator, and so on.

The challenge here is that the majority of those combinations will not resolve - if we attempt to load "file1" from "PlatformResources" and learn that it's not there, we can give up on returning any combination with that file/source pair. So we do that - we have a cache that we fill with blacklisted file/source combos which we know won't resolve. That, in turn, shrinks the output of the generator really nicely.

Now, the final challenge is that we'd like to do the I/O asynchronously. And here come async iterators :) They tie it together nicely - the first combo is fetched asynchronously; if it resolves, we return it from the generator, and if it doesn't, we cache what we learned and attempt the next one. In the end, the generator returns only fully resolved combos, and the caller can just use them and, if they need the next one, just ask for it. It gives a really nice API that hides a lot of complexity, so I'd like to shoot for it. We may land a sync version first and experiment with the async one once async iterators land.

Does it answer your question?
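For illustration, here is a minimal standalone sketch of that lazy-permutation idea - hypothetical names and data, not the actual L10nRegistry code:

```
// Pretend registry of which source actually contains which file.
const available = new Set([
  "PlatformResources/file1",
  "BrowserResources/file2",
  "BrowserResources/file3",
]);

// Cache of (source, file) combos we already know cannot resolve.
const missing = new Set();

function hasFile(source, file) {
  const key = `${source}/${file}`;
  if (missing.has(key)) {
    return false;
  }
  if (!available.has(key)) {
    missing.add(key);
    return false;
  }
  return true;
}

// Lazily yield one fully-resolved combination at a time instead of
// materializing every permutation of sources x files up front.
function* generateBundles(sources, files, resolved = []) {
  if (resolved.length === files.length) {
    yield files.map((file, i) => ({ file, source: resolved[i] }));
    return;
  }
  const file = files[resolved.length];
  for (const source of sources) {
    if (hasFile(source, file)) {
      yield* generateBundles(sources, files, [...resolved, source]);
    }
  }
}

// The caller pulls only as many bundles as it actually needs.
const bundles = generateBundles(
  ["PlatformResources", "BrowserResources"],
  ["file1", "file2", "file3"]
);
console.log(bundles.next().value);
// -> [{file: "file1", source: "PlatformResources"},
//     {file: "file2", source: "BrowserResources"},
//     {file: "file3", source: "BrowserResources"}]
```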
> Does it answer your question? It does, to some extent. What you're describing still sounds like a perfect use case for streams, though, including the filtering which would be a transform stream. Then again, while I hope to land ReadableStream this cycle, transform streams will take a while longer, so it probably doesn't make sense to make this rely on them. As a fallback, async iterators are a pretty good solution, I agree.
Depends on: 1335517
Depends on: 1309341
Depends on: 1337694
Depends on: 1347002
Depends on: 1347306
No longer blocks: 1280688
No longer depends on: 1309341
Depends on: 1344355
Assignee: nobody → gandalf
Status: NEW → ASSIGNED
Depends on: 1352312
No longer blocks: 1349401
Depends on: 1348042
No longer depends on: 1344355
Depends on: 1347801
Comment on attachment 8844352 [details]
Bug 1333980 - Introduce L10nRegistry.jsm.

This is basically ready in sync mode. Once bug 1352312 is resolved, we can easily switch it to async if performance isn't affected.
Attachment #8844352 - Flags: review?(dtownsend) → feedback?(dtownsend)
Comment on attachment 8844352 [details]
Bug 1333980 - Introduce L10nRegistry.jsm.

https://reviewboard.mozilla.org/r/117840/#review144268

I think I'm still failing to understand how it will be easy to convert this API to async when available and I'd like to look at it again when the issues listed have been addressed.

::: intl/l10n/L10nRegistry.jsm:41
(Diff revision 6)
> + *   ]);
> + *
> + * the generator will return an iterator over the following contexts:
> + *
> + *   {
> + *     locale: 'pl',

Where did pl come from?

::: intl/l10n/L10nRegistry.jsm:44
(Diff revision 6)
> + *
> + *   {
> + *     locale: 'pl',
> + *     resources: [
> + *       ['app', '/browser/menu.ftl'],
> + *       ['platform', '/platform/toolkit.ftl'],

Why 'platform'?

::: intl/l10n/L10nRegistry.jsm:79
(Diff revision 6)
> + * at the top.
> + */
> +
> +const L10nRegistry = {
> +  sources: new Map(),
> +  sourcesOrder: [],

Maps iterate over their contents in insertion order so I think this is just sources.keys().

::: intl/l10n/L10nRegistry.jsm:88
(Diff revision 6)
> +      yield * generateContextsForLocale(loc, L10nRegistry.sourcesOrder, resIds);
> +    }
> +  },
> +
> +  registerSource(source) {
> +    if (L10nRegistry.sources.has(source.name)) {

Why L10nRegistry rather than this?

::: intl/l10n/L10nRegistry.jsm:107
(Diff revision 6)
> +
> +  getAvailableLocales() {
> +    const locales = new Set();
> +
> +    for (const name of L10nRegistry.sourcesOrder) {
> +      const source = L10nRegistry.sources.get(name);

You can simplify these two lines to:

for (const source of L10nRegistry.sources.values())

::: intl/l10n/L10nRegistry.jsm:122
(Diff revision 6)
> +    L10nRegistry.sourcesOrder.length = 0;
> +  },
> +};
> +
> +// This helper function generates contexts for a given locale.
> +function* generateContextsForLocale(loc, sourcesOrder, resIds, resolvedOrder = []) {

This function could do with a lot more explanation in it. It isn't clear to me why it is recursive.

::: intl/l10n/L10nRegistry.jsm:182
(Diff revision 6)
> +      return false;
> +    }
> +
> +    const fullPath = this.getPath(locale, path);
> +    if (!this.cache.hasOwnProperty(fullPath)) {
> +      return;

Return what, undefined?

::: intl/l10n/L10nRegistry.jsm:210
(Diff revision 6)
> +    }
> +    return this.cache[fullPath];
> +  }
> +}
> +
> +class IndexedFileSource {

How does this differ from above? When is it used? Can it extend the above?

::: intl/l10n/L10nRegistry.jsm:250
(Diff revision 6)
> +      return null;
> +    }
> +  }
> +}
> +
> +L10nRegistry.load = function(url) {

Why is this defined down here? Seems like it could be a standalone function.
Attachment #8844352 - Flags: feedback?(dtownsend)
Updated per the feedback and requesting review. Landing this will enable us to more easily hook AddonsManager (bug 1365709), Fennec DLC (bug 1347802) and our new Localization API (bug 1347800) into it.
Comment on attachment 8844352 [details]
Bug 1333980 - Introduce L10nRegistry.jsm.

https://reviewboard.mozilla.org/r/117840/#review162188

::: intl/l10n/L10nRegistry.jsm:82
(Diff revision 7)
> +
> +const L10nRegistry = {
> +  sources: new Map(),
> +  ctxCache: new Map(),
> +
> +  * generateContexts(requestedLangs, resIds) {

Please add javadoc for this function. Can we rename resIds to resourceIds or whatever it actually stands for throughout?

::: intl/l10n/L10nRegistry.jsm:82
(Diff revision 7)
> +
> +const L10nRegistry = {
> +  sources: new Map(),
> +  ctxCache: new Map(),
> +
> +  * generateContexts(requestedLangs, resIds) {

So this returns an iterator over promises that the caller has to await on individually to get each context. What benefit is there to that over just returning a single promise that resolves to an array of contexts?

::: intl/l10n/L10nRegistry.jsm:84
(Diff revision 7)
> +  sources: new Map(),
> +  ctxCache: new Map(),
> +
> +  * generateContexts(requestedLangs, resIds) {
> +    const sourcesOrder = Array.from(this.sources.keys()).reverse();
> +    for (const loc of requestedLangs) {

Can we use locale rather than the shorter loc for clarity? Here and elsewhere.

::: intl/l10n/L10nRegistry.jsm:102
(Diff revision 7)
> +  updateSource(source) {
> +    if (!this.sources.has(source.name)) {
> +      throw new Error(`Source with name "${source.name}" is not registered.`);
> +    }
> +    this.sources.set(source.name, source);
> +  },

Shouldn't we send the available-locales-changed notification here?

::: intl/l10n/L10nRegistry.jsm:104
(Diff revision 7)
> +      throw new Error(`Source with name "${source.name}" is not registered.`);
> +    }
> +    this.sources.set(source.name, source);
> +  },
> +
> +  removeSource(sourceId) {

Elsewhere this is source.name, can we use sourceName here?

::: intl/l10n/L10nRegistry.jsm:106
(Diff revision 7)
> +    this.sources.set(source.name, source);
> +  },
> +
> +  removeSource(sourceId) {
> +    this.sources.delete(sourceId);
> +  },

Shouldn't we send the available-locales-changed notification here?

::: intl/l10n/L10nRegistry.jsm:134
(Diff revision 7)
> +
> +/**
> + * This function generates an iterator for MessageContexts for a given
> + * list of resIds for all possible combinations of sources.
> + */
> +function* generateContextsForLocale(loc, sourcesOrder, resIds, resolvedOrder = []) {

I find this function very difficult to read. I think it could be done simpler but I need to think on it a little more.

::: intl/l10n/L10nRegistry.jsm:141
(Diff revision 7)
> +  const resourcesLength = resIds.length;
> +
> +  // Inside that loop we have a list of resources and the sources for them, like this:
> +  //   ['test.ftl', 'menu.ftl', 'foo.ftl']
> +  //   ['app', 'platform', 'app']
> +  for (const sourceId of sourcesOrder) {

s/sourceId/sourceName/

::: intl/l10n/L10nRegistry.jsm:144
(Diff revision 7)
> +  //   ['test.ftl', 'menu.ftl', 'foo.ftl']
> +  //   ['app', 'platform', 'app']
> +  for (const sourceId of sourcesOrder) {
> +    const order = resolvedOrder.concat(sourceId);
> +
> +    if (order.some((id, idx) => L10nRegistry.sources.get(id).hasFile(loc, resIds[idx]) === false)) {

By this point don't you already know that resolvedOrder contains sources that you want, the only thing you need to check is if sourceId has resIds[resolvedOrder.length] in it.

::: intl/l10n/L10nRegistry.jsm:165
(Diff revision 7)
> +async function generateContext(loc, sourcesOrder, resIds) {
> +  const ctxId = generateContextID(loc, sourcesOrder, resIds);
> +  if (!L10nRegistry.ctxCache.has(ctxId)) {
> +    const ctx = new MessageContext(loc);
> +    for (let i = 0; i < resIds.length; i++) {
> +      const data = await L10nRegistry.sources.get(sourcesOrder[i]).fetchFile(loc, resIds[i]);

We load data from multiple resourceIds, what is to stop them from having conflicting keys?

::: intl/l10n/L10nRegistry.jsm:169
(Diff revision 7)
> +    for (let i = 0; i < resIds.length; i++) {
> +      const data = await L10nRegistry.sources.get(sourcesOrder[i]).fetchFile(loc, resIds[i]);
> +      if (data === null) {
> +        return false;
> +      }
> +      ctx.addMessages(data);

I was rather hoping that we'd be doing the parsing asynchronously too, otherwise we run the risk of janking the main thread when parsing a long file on a low-end machine.

::: intl/l10n/L10nRegistry.jsm:181
(Diff revision 7)
> +/**
> + * This is a basic Source for L10nRegistry.
> + * It registers its own locales and a pre-path, and when asked for a file
> + * it attempts to download and cache it.
> + *
> + * The Source caches the downloaded files so any consequitive loads will

consecutive

::: intl/l10n/L10nRegistry.jsm:238
(Diff revision 7)
> + * for sources that can provide the list of files available in the source.
> + *
> + * This allows for a faster lookup in cases where the source does not
> + * contain most of the files that the app will request for (e.g. an addon).
> + **/
> +class IndexedFileSource extends FileSource {

Shouldn't this cache the files too?

::: intl/l10n/L10nRegistry.jsm:277
(Diff revision 7)
> +/**
> + * We keep it as a method to make it easier to override for testing
> + * purposes
> + **/
> +L10nRegistry.load = async function(url) {
> +  return fetch(url).then(data => data.text()).catch(e => undefined);

Nit: trailing white space

::: intl/l10n/L10nRegistry.jsm:284
(Diff revision 7)
> + * It should be placed in the chain before any others calls to
> + * L10nRegistry.

Rather than enforcing this constraint on callers can we just enforce it in the API itself?

::: intl/l10n/test/test_l10nregistry.js:20
(Diff revision 7)
> +  equal(typeof L10nRegistry.getAvailableLocales, "function");
> +  equal(typeof L10nRegistry.registerSource, "function");
> +  equal(typeof L10nRegistry.updateSource, "function");
> +}
> +
> +function test_methods_calling(L10nRegistry) {

Please use add_task and an async function here.
Attachment #8844352 - Flags: review?(dtownsend)
We really need many many more tests than there are here. Is there a test suite running elsewhere for this?
Flags: needinfo?(gandalf)
> So this returns an iterator over promises that the caller has to await on individually to get each context. What benefit is there to that over just returning a single promise that resolves to an array of contexts?

The core idea behind L10nRegistry is that it resolves lazily. A single promise would mean that we generate all contexts at once (which may be quite a large number of permutations of sources and locales). Using a generator means that in the most common case we just create one.

Once we enable async iterators, we'll just return an async iterator instead of an iterator of promises, and it should make the code look slightly easier :)

Do you agree it makes sense to go for an (async) generator here?

> We really need many many more tests than there are here. Is there a test suite running elsewhere for this?

Most of the tests are here: https://github.com/zbraniecki/L10nRegistry/tree/master/test

I'll migrate them all into the tree.
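To make the trade-off concrete, a caller of the iterator-of-promises shape might look roughly like this. It is a sketch only; `hasMessage` here stands in for however the caller decides that a given MessageContext is usable:

```
// Sketch: walk the lazy iterator of promises and stop at the first context
// that knows about the requested message. Contexts further down the fallback
// chain, and their I/O, are never touched unless we step past them.
async function firstContextFor(contextsIterator, id) {
  for (const ctxPromise of contextsIterator) {
    const ctx = await ctxPromise;
    if (ctx && ctx.hasMessage(id)) {
      return ctx;
    }
  }
  return null; // nothing in any fallback chain had this message
}
```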
Flags: needinfo?(gandalf) → needinfo?(dtownsend)
(In reply to Zibi Braniecki [:gandalf][:zibi] from comment #18)
> > So this returns an iterator over promises that the caller has to await on individually to get each context. What benefit is there to that over just returning a single promise that resolves to an array of contexts?
>
> The core idea behind L10nRegistry is that it resolves lazily. A single promise
> would mean that we generate all contexts at once (which may be quite a large
> number of permutations of sources and locales).
> Using a generator means that in the most common case we just create one.
>
> Once we enable async iterators, we'll just return an async iterator instead
> of an iterator of promises, and it should make the code look slightly easier :)
>
> Do you agree it makes sense to go for an (async) generator here?

So I'm thinking of the developer ergonomics of using this API. Let's say I'm the download manager code and I want to get 10 or so strings for a locale. Am I calling L10nRegistry directly or is there some intermediate API that will wrap this to make it different?

If I'm calling it directly here is how it goes: For each string I have to request the context iterator and async iterate through them all to find the first one that has the string I want. Once most of the contexts have been loaded that's a lot of event loop spinning for not much purpose. A wrapping API could make it a bit nicer but only if you have all the strings you want to get in a single place up-front.

The alternative where we async load all the contexts up front looks like this: Request contexts. For each string iterate the array of contexts to find the first one that has the string. This does all the async work up-front after which I can cache the contexts and anytime I need a new string iterate them to find it. A wrapping API even makes it so I don't have to do the iteration myself.
Flags: needinfo?(dtownsend) → needinfo?(gandalf)
> So I'm thinking of the developer ergonomics of using this API. Let's say I'm the download manager code and I want to get 10 or so strings for a locale. Am I calling L10nRegistry directly or is there some intermediate API that will wrap this to make it different?

You're not calling it directly. MessageContext is the low-level sync API that doesn't do any I/O. L10nRegistry is a pretty sophisticated L10n-specific I/O manager for Gecko which replaces ChromeRegistry and handles langpacks etc. But both of them are low-level and only expected to be used by developers via a front-facing async L10n API.

The first, vanilla JS API is waiting in bug 1347800, and on top of it there's DOMLocalization (bug 1347799). The vanilla one will work something like this:

```
let l10n = new Localization(['/browser/menu.ftl', '/platform/errors.ftl']);
let msg = await l10n.formatValue('open-file');
```

It will use LocaleService to negotiate the correct languages, L10nRegistry to retrieve the resources, and MessageContext to format them.

The DOM API extends the Localization class and allows for operations like "attachRoot" (to start observing `data-l10n-id` attribute changes), "translateDocument", "translateFragment" etc.

The final piece of the puzzle (for vanilla HTML/XUL+JS) is a small per-document l10n.js (bug 1347798) which collects the resources to be fetched from <head> and creates `document.l10n`, which is an instance of DOMLocalization. The developer will end up just attaching <script type="application/javascript" src="chrome://global/content/l10n.js" /> to the document and then the document will be localized dynamically.

From JS the developer will be able to call:

```
let msg = await document.l10n.formatValue('open-file');
```

You can see an example of a patch that switches a single XUL file to use the new API to localize a single DOM element here: https://bitbucket.org/zbraniecki/mozilla-unified/commits/f9da092141cfc00b0245c5d5df4c19808c4d6603

So, to sum it up: this API is low level. It allows us to stay modular, so that MessageContext and L10nRegistry can be plugged into FluentReact for React or FluentVue for Vue, and they plug into Localization/DOMLocalization for vanilla HTML/XUL+JS localization.

> Once most of the contexts have been loaded that's a lot of event loop spinning for not much purpose.

In most cases we hope to never get "deeper" than one context. All the remaining contexts are fallbacks that only get triggered when an error happens. In the l10n.js code we use cacheIterator, which allows us to reuse already-resolved MessageContext objects and only call `next` on the iterator when we step "deeper" than what's cached.

We believe that using an async generator here is the right solution and I hope you can see now that the developer-facing API is not complex. I'm applying the rest of your feedback right now, but let me know if this convinces you that the async generator is OK to use here :)
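For reference, a caching wrapper like the cacheIterator mentioned above could look roughly like this. This is a sketch under the assumption that it simply memoizes already-pulled values; it is not the actual l10n.js code:

```
// Sketch: memoize values pulled from a lazy iterable so repeated consumers
// replay cached results and only advance the underlying iterator when they
// step past what has already been resolved.
function makeCachedIterable(iterable) {
  const cache = [];
  const source = iterable[Symbol.iterator]();
  return {
    *[Symbol.iterator]() {
      yield* cache; // replay already-resolved values first
      for (let next = source.next(); !next.done; next = source.next()) {
        cache.push(next.value); // remember for later consumers
        yield next.value;
      }
    },
  };
}

// Usage: wrap the registry's generator once; each formatting pass iterates
// the wrapper and usually never steps past the first (cached) context.
// const contexts = makeCachedIterable(L10nRegistry.generateContexts(locales, resIds));
```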
Flags: needinfo?(gandalf)
Flags: needinfo?(dtownsend)
p.s. You can read more about the API of Localization and DOMLocalization here: https://github.com/projectfluent/fluent.js/tree/master/fluent-dom

I'd like to make the "generateMessages" argument to the constructor of both optional in Gecko and just use L10nRegistry.generateMessages when nothing else is passed.
If this is the API that you want to build your libraries on top of then that's ok. I'm not entirely convinced but we can still change things later thanks to not having to rely on extension compatibility!
Flags: needinfo?(dtownsend)
(In reply to Zibi Braniecki [:gandalf][:zibi] from comment #20)
> From JS the developer will be able to call:
>
> ```
> let msg = await document.l10n.formatValue('open-file');
> ```

This worries me a little. Having to await for every string may be problematic for lots of code. I hope we don't end up having to change that.
Comment on attachment 8844352 [details]
Bug 1333980 - Introduce L10nRegistry.jsm.

https://reviewboard.mozilla.org/r/117840/#review162188

> By this point don't you already know that resolvedOrder contains sources that you want, the only thing you need to check is if sourceId has resIds[resolvedOrder.length] in it.

woa. Thank you. I completely missed it!

> I was rather hoping that we'd be doing the parsing asynchronously too, otherwise we run the risk of janking the main thread when parsing a long file on a low-end machine.

In profiling so far (and we're profiling a conservative scenario with browser.xul on the startup path and the biggest l10n resource file in Firefox) we didn't see it affecting the perf too much. Is it ok if I file a follow-up to experiment with a worker here?

> Shouldn't this cache the files too?

Not sure. We're caching contexts already, so if the same context is opened again (say, a new window, or reopening the preferences document), it'll be ready. Caching documents would mean that we cache both parsed and unparsed data, which seems like a higher cost. Caching just the information about whether the file is present allows us to skip attempts to generate consecutive permutations for this file for this source.

I can see an argument that maybe we should cache just ASTs, and create a new MessageContext for each new document but fill it with data from the already cached ASTs. That would also prevent two instances of a single document from sharing the MessageContext, which may be important for security?

What do you think?

> Rather than enforcing this constraint on callers can we just enforce it in the API itself?

Because of timing. Currently we can trigger the bootstrap function ASAP to trigger the fetch of multilocale.json. But generateContexts can only happen once we know what resources the document asks for.

If I turned L10nRegistry.generateContexts into an async function that performs bootstrap and then returns the iterator, I'd only be able to trigger the bootstrap after the first document is loaded.

Does it seem reasonable?
> If this is the API that you want to build your libraries on top of then that's ok. I'm not entirely convinced but we can still change things later thanks to not having to rely on extension compatibility!

Yes! I don't want to land it until we have one more patch reviewed by you (the Localization API - bug 1347800). I just keep them in separate bugs to keep the review process for each API separate. I believe that in the process of reviewing the Localization and DOMLocalization APIs, we can fine-tune the logic of MessageContext and L10nRegistry. Only once we have DOMLocalization and Localization reviewed and landed will we start experimenting with using it for anything. I want to make sure that the whole API cluster is complete and gets your review.

> This worries me a little. Having to await for every string may be problematic for lots of code. I hope we don't end up having to change that.

That's a very conscious decision on our side that feels awkward because historically, L10n APIs were happily synchronous. Fluent changes this paradigm to allow for multiple new features to happen, including async I/O (including fallback). The bottom line is that if we want the I/O to be async, then there has to be an async function that guarantees that the resources are loaded.

While working on Firefox OS, where we also pushed through the switch from sync to async, we experimented with multiple models, including something like:

```
document.l10n.formatMessages(['message1', 'message2', 'message3']).then(([value1, value2, value3]) => {
  ...
});
```

where you create scopes in which the three messages are formatted. That was a bit hard for people to work with and ensure, so now we basically do the same but with lower cognitive overload on the engineers, since we don't ask them to encapsulate their code into our scope. We keep the scope as internal state of the Localization API.

Technically, it's not impossible to introduce a sync variant. It would require us to add a few functions like `document.l10n.formatMessageSync`, which would use `formatWithFallbackSync`, which would call a sync `L10nRegistry.generateContextsSync`. That requires us to resolve contexts synchronously, including fallbacks (or remove fallbacks). It's not hard for us to add, but we'd like to try to keep it async until we find a scenario where it's necessary.

The good news is that based on our experience from Firefox OS, we actually learned that in principle there's no reason for L10n to be synchronous. L10n is about displaying, and displaying UI is asynchronous in nature.

In terms of the experience of using the API, we heavily incentivize people to use our declarative DOM API, so 95% of the time you localize by doing:

```
<h1 data-l10n-id="message1"/>
```

or:

```
document.l10n.setAttributes(h1, 'message1', { user: "John" });
```

rather than calling formatValue manually. In the cases where we encountered uses of JS calls, they're usually used to format a message before sending it outside of the system (bluetooth trigger, notification etc.), in which case things are asynchronous anyway.

In short - I'd like to stick to async for now, and consider adding a high-level API variant for synchronous localization only when we encounter a use case that requires it. Our team (:stas, :pike and me) is aware that this is a paradigm shift, and we designed Fluent to be able to add a sync variant without much pain by just adding *Sync versions of several functions on the path from I/O to formatValue.

Does it work for you for now?
(In reply to Zibi Braniecki [:gandalf][:zibi] from comment #25)
> Comment on attachment 8844352 [details]
> Bug 1333980 - Introduce L10nRegistry.jsm.
>
> https://reviewboard.mozilla.org/r/117840/#review162188
>
> > By this point don't you already know that resolvedOrder contains sources that you want, the only thing you need to check is if sourceId has resIds[resolvedOrder.length] in it.
>
> woa. Thank you. I completely missed it!
>
> > I was rather hoping that we'd be doing the parsing asynchronously too, otherwise we run the risk of janking the main thread when parsing a long file on a low-end machine.
>
> In profiling so far (and we're profiling a conservative scenario with
> browser.xul on the startup path and the biggest l10n resource file in
> Firefox) we didn't see it affecting the perf too much. Is it ok if I file
> a follow-up to experiment with a worker here?

That's fine. I wonder if we should also add some telemetry recording how long it takes to parse files. The larger that is the more jank we'd see, and telemetry could tell us that we have a problem.

> > Shouldn't this cache the files too?
>
> Not sure. We're caching contexts already, so if the same context is opened
> again (say, a new window, or reopening the preferences document), it'll be ready.
> Caching documents would mean that we cache both parsed and unparsed data,
> which seems like a higher cost. Caching just the information about whether
> the file is present allows us to skip attempts to generate consecutive
> permutations for this file for this source.
>
> I can see an argument that maybe we should cache just ASTs, and create a new
> MessageContext for each new document but fill it with data from the
> already cached ASTs.
> That would also prevent two instances of a single document from sharing the
> MessageContext, which may be important for security?
>
> What do you think?

I don't have a strong opinion here. Let's see how this works out.

> > Rather than enforcing this constraint on callers can we just enforce it in the API itself?
>
> Because of timing. Currently we can trigger the bootstrap function ASAP to
> trigger the fetch of multilocale.json. But generateContexts can only
> happen once we know what resources the document asks for.
>
> If I turned L10nRegistry.generateContexts into an async function that
> performs bootstrap and then returns the iterator, I'd only be able to
> trigger the bootstrap after the first document is loaded.

I'm not sure why that is the case. You could still trigger the bootstrap early but just have generateContexts await it finishing. The alternative is you have to remember to await on the bootstrap for every caller just in case.

(In reply to Zibi Braniecki [:gandalf][:zibi] from comment #26)
> Our team (:stas, :pike and me) is aware that this is a paradigm shift, and
> we designed Fluent to be able to add a sync variant without much pain by just
> adding *Sync versions of several functions on the path from I/O to
> formatValue.
>
> Does it work for you for now?

Yes, let's see how this goes.
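For illustration, the pattern being suggested might look roughly like this (hypothetical helper names; not the actual patch):

```
// Illustrative only: kick off bootstrap eagerly, and have the async path
// await the same promise instead of every caller remembering to do so.

// Hypothetical stand-ins for whatever the runtime setup actually does.
async function loadMultilocaleList() {
  return ["en-US"]; // pretend we fetched and parsed multilocale.json
}
function registerDefaultSources(locales) {
  // pretend we register a packaged FileSource for each locale
}

let bootstrapPromise = null;

function bootstrap() {
  if (!bootstrapPromise) {
    bootstrapPromise = loadMultilocaleList().then(registerDefaultSources);
  }
  return bootstrapPromise;
}

bootstrap(); // called as early as possible at startup

async function generateContext(locale, sourcesOrder, resourceIds) {
  await bootstrap(); // a no-op once the promise has resolved
  // ...fetch files and assemble the MessageContext as before...
}
```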
> That's fine. I wonder if we should also add some telemetry recording how long it takes to parse files. The larger that is the more jank we'd see and telemetry could tell us that we have a problem.

Sure! I'll file the bugs.

> I'm not sure why that is the case. You could still trigger the bootstrap early but just have generateContexts await it finishing. The alternative is you have to remember to await on the bootstrap for every caller just in case.

Good point. I tried to do that, but in the process came to the conclusion that the "bootstrap" part is not really part of the L10nRegistry API. It's just something we want to have at runtime so that L10nRegistry has data.

So instead of trying to jam it into L10nRegistry and in the process turning all the APIs that call L10nRegistry.generateContexts into async ones, I just moved it outside of L10nRegistry and into the runtime code (which is bug 1347798). Once we get to reviewing that, you may be able to help me find a better place to initialize that code. For now, this makes L10nRegistry cleaner and more pure, since there's no implicitly running code in it. :)
Filed bug 1384235 and 1384236 and set them as blocking bug 1365426.
Attachment #8844352 - Flags: review?(dtownsend) → review+
Status: ASSIGNED → RESOLVED
Closed: 7 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla57
Flags: needinfo?(gandalf)