Closed Bug 945152 Opened 11 years ago Closed 10 years ago

Back the memory of TypedArrays returned by XHR with the local file

Categories

(Core :: DOM: Core & HTML, defect)

All
Gonk (Firefox OS)
defect
Not set
normal

Tracking


RESOLVED FIXED
blocking-b2g -

People

(Reporter: sinker, Assigned: swu)

References

(Depends on 1 open bug)

Details

Attachments

(7 files, 54 obsolete files)

(deleted), patch
sfink
: review+
Details | Diff | Splinter Review
(deleted), patch
swu
: review+
Details | Diff | Splinter Review
(deleted), patch
swu
: review+
Details | Diff | Splinter Review
(deleted), patch
bugzilla
: review+
Details | Diff | Splinter Review
(deleted), patch
fabrice
: review+
smaug
: review+
Details | Diff | Splinter Review
(deleted), patch
smaug
: review+
Details | Diff | Splinter Review
(deleted), patch
smaug
: review+
Details | Diff | Splinter Review
The story: The keyboard apps load their massive tables as TypedArrays returned by XHR. These tables can be several MB each; the Chinese dictionaries, for example. They are memory-eating monsters. Solution: Since these files are usually already on the device, we can map the content from the file with a private mapping, so the kernel can reclaim the pages occupied by the table when the foreground app needs memory.
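For illustration, a minimal POSIX sketch of the private (copy-on-write) mapping idea; the function name and error handling here are illustrative only, not the eventual Gecko implementation:

#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

// Map a whole file with MAP_PRIVATE: writes stay in this process (copy-on-write),
// and clean pages can be dropped by the kernel and re-read from the file later.
void *MapFileCopyOnWrite(const char *path, size_t *outSize) {
  int fd = open(path, O_RDONLY);
  if (fd < 0)
    return nullptr;
  struct stat st;
  if (fstat(fd, &st) != 0) {
    close(fd);
    return nullptr;
  }
  void *buf = mmap(nullptr, st.st_size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE, fd, 0);
  close(fd);  // the mapping keeps its own reference to the file
  if (buf == MAP_FAILED)
    return nullptr;
  *outSize = (size_t)st.st_size;
  return buf;
}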
Yup, this is a good idea and actually not crazy difficult. The main problem is that TypedArrays are read/write by default, so that kinda sucks. Luke can probably help with some design ideas.
That's a cool idea. By mmap/MapViewOfFile-ing with MAP_PRIVATE/FILE_MAP_COPY, I don't think the mutability of typed arrays would be a problem, but maybe I'm missing a point. We already have a case where the ArrayBuffer points to a specially-mapped/released buffer (asm.js-linked ArrayBuffers). One challenge though is that we currently assume that the mmap'ed buffer contains a special struct immediately preceding the contents of the ArrayBuffer (and this struct is dynamically written). We might be able to work around this, though, by having obj->slots point to the ObjectElements (does this make sense sfink)?
Actually mmap won't work here. The file we are trying to map will be inside a compressed package (zip). So we could stick with the current approach of allocating memory, wire it up to the OOM hook internally, free it, and unset the read bit on the pages to force a page fault, which we would then detect and service with a synchronous I/O. Would that work?
That sounds possible, although the use of signal handling always makes things much harder and more dangerous. Another thought is to standardize/implement ArrayBuffer.discard (madvise(MADV_DONTNEED); discussion in bug 855669) and have JS discard its own pages. After discard, the memory is logically zero'd (if it were to be read again) which avoids the need for signal handlers entirely. It seems like this approach could allow us to free memory even more eagerly than OOM, although it doesn't help with virtual address space utilization. One last thought: I heard from Jonas that there is a stream proposal in progress; I think streams are intended to solve use cases exactly like this. I'm pretty far out of the loop on it, but has it been considered? Perhaps ArrayBuffer.discard would be the better short-term solution.
They want a long lived buffer that can reload from disk instead of blocking physical pages.
(In reply to Andreas Gal :gal from comment #5)
> They want a long lived buffer that can reload from disk instead of blocking
> physical pages.

If there are/were memory pressure events or other hints, then the app could cooperatively discard the array, or a part of the array, and re-load the content when needed.

btw the ArrayBuffer.discard patch aligns the start of the elements to a page boundary, and this might help mapping in a file or with some other scheme.
Often, OOM comes up too soon after low memory pressure to free memory, so cooperative discarding may not work very well; we should check it.

Back to the mmap() solution. Since most data of this type stays on the device for a long time, it is easier to implement by decompressing files from the zip before mapping them into the process the first time. For the preceding struct, the space can be reserved during decompression. But since the size of the struct may vary by revision, we can add an assertion so it fails early, or move the content to a separate space with a pointer to it. The decompression is one-time work, but it makes the whole implementation far simpler.
Another solution is volatile range at bug 883253. It is designed to handle data of this type.
Some thoughts on the mmap() solution. If we want to map a file "bigtable.data" in preinstalled.pkg, the following steps can be done (see the sketch after this list):
1. If preinstalled.pkg::bigtable.data is there, map the file to a buffer and return.
2. Create preinstalled.pkg::bigtable.data.
3. mmap the file as a read-write shared mapping with NOSYNC (write back).
4. Decompress bigtable.data into the buffer.
5. Map the file again to another buffer as a private mapping.
6. Unmap the first buffer.
With this procedure, lower I/O pressure can be expected on first use.
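A rough POSIX sketch of steps 2 through 6 above, assuming hypothetical names (extractedPath, uncompressedSize, zipEntry, and a Decompress() helper) and omitting error handling; MAP_NOSYNC is not portable, so the write-back hint from step 3 is left out:

// Step 2: create the extracted file and reserve its full size.
int fd = open(extractedPath, O_RDWR | O_CREAT, 0600);
ftruncate(fd, uncompressedSize);
// Step 3: shared read-write mapping used as the decompression target.
void *writeBuf = mmap(nullptr, uncompressedSize, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);
// Step 4: decompress into the shared mapping (hypothetical helper).
Decompress(zipEntry, writeBuf, uncompressedSize);
// Step 5: map the same file again, privately, for the consumer.
void *readBuf = mmap(nullptr, uncompressedSize, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE, fd, 0);
// Step 6: drop the first mapping; readBuf remains valid.
munmap(writeBuf, uncompressedSize);
close(fd);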
zip supports store/plain. That is, to archive the files uncompressed. If we can tolerate some size increment in the storage, we can adjust the build script a little bit and map data of interest directly.
(In reply to Ting-Yuan Huang from comment #10)
> zip supports store/plain. That is, to archive the files uncompressed. If we
> can tolerate some size increment in the storage, we can adjust the build
> script a little bit and map data of interest directly.

What would be the expected percentage of growth in terms of size? It sounds like a reasonable trade-off since we're shorter on RAM than on flash.
Note that we already had issues with keyboard dictionary sizes. Needinfo on David that can tell you the story!
Flags: needinfo?(dflanagan)
Might there be an opportunity here to add some more general support for mmapping file ranges into typed arrays? It might also be a big win for games with very large asset files. E.g. ArrayBuffer.mapFile(file?, start, size, offset) * file? - a range of sources that could support direct mapping, or even just optimize the speed of the operation. * start, size - the array buffer range. * offset - the start offset in the file.
(In reply to Fabrice Desré [:fabrice] from comment #12)
> Note that we already had issues with keyboard dictionary sizes. Needinfo on
> David that can tell you the story!

Since bug 884752 we only include a certain set of keyboard layouts in the build. Disk consumption should be a more mitigable issue than memory consumption, right? :)

(In reply to James Ho from comment #11)
> What would be the expected percentage of growth in terms of size? It sounds
> like a reasonable trade-off since we're shorter on RAM than on flash.

The size growth should be really small since these binary dictionaries are already encoded/compressed.

(In reply to Ting-Yuan Huang from comment #10)
> zip supports store/plain. That is, to archive the files uncompressed. If we
> can tolerate some size increment in the storage, we can adjust the build
> script a little bit and map data of interest directly.

Yes, the Gaia build script could do that (not compress certain files). We could ask the build script to parse a metadata manifest for that. However, the behavior of Gecko will be almost hidden from web developers, making this feature hard to target and arguably unreliable. I would recommend we add a console warning if the loaded file could have been optimized but fell out of the optimization for some reason, and link to this bug. If there isn't a console warning, we might as well unzip the file to /tmp and use it. Please don't implement a feature that ends up being a little secret between the people in this bug.
FYI responseType='blob' already supports local-file-backed blob.
(In reply to Fabrice Desré [:fabrice] from comment #12)
> Note that we already had issues with keyboard dictionary sizes. Needinfo on
> David that can tell you the story!

The dictionary files live in gaia/keyboard/dictionaries. The dictionaries there now take up 55mb. If I put them in a directory and zip the directory, I get a zip file that is 37mb. So the compressed dictionaries are about 2/3rds the size of the uncompressed dictionaries. Or to turn that around, if we leave the dictionaries uncompressed in the zip file, they will take 50% more space than they do now. That doesn't seem a terribly large price to pay.

Note, however, that the latin input method discards its dictionary file after 30s of inactivity (or at least it used to... I assume that still works). If the keyboard is showing, then obviously the dictionary needs to be swapped in. (Or we will page thrash on every keystroke.) If the keyboard is not showing, then most of the time the dictionary will have already been freed and this won't be an issue.

I don't know if the Chinese input methods do this, but if they do, or are modified to do it, then do we really need the memory mapping scheme? If 30 seconds is too long, we can always try something shorter.
Flags: needinfo?(dflanagan)
I should add that using mmap() seems like a great idea. I just wanted to point out above that if the immediate need is to reduce keyboard memory usage, modifying the Asian IMs to explicitly free their dictionaries when not in use might be a quicker fix that could get into 1.3.

It also occurs to me that we should verify that the latin IM still frees its dictionaries after 30 seconds of inactivity. If the switch to the 3rd party keyboard framework has broken that code, we ought to fix it (and koi+ the fix, too!)
Whiteboard: [tarako]
(In reply to David Flanagan [:djf] from comment #16)
> I don't know if the Chinese input methods do this, but if they do, or are
> modified to do it, then do we really need the memory mapping scheme?

FYR Pinyin and Zhuyin drop the worker containing the dictionary entirely after 600 and 60 sec, respectively. Developers simply made an intuitive decision here. We could certainly change, or even customize, that.
Blocks: 128RAM
blocking-b2g: --- → 1.3?
This is not a regression and shouldn't block 1.3 this late in the game. Let's fix this on trunk and we can then see when and where to uplift it.
triage: this is needed for the tarako device (or devices with similarly small memory) but this should not be uplifted to 1.3. Using [NO_UPLIFT] to create a query of bugs that need to happen on the 1.3t branch if it gets created.
blocking-b2g: 1.3? → 1.3+
Whiteboard: [tarako] → [tarako][NO_UPLIFT]
assign to ting-yuan for now
Assignee: nobody → thuang
I like the idea of storing an uncompressed version of the dictionary in the .zip file and then mmap'ing that. However how do we deal with the fact that ArrayBuffers are always read/write? Could we do a copy-on-write? Or could we add support for read-only ArrayBuffers that throw when written to? I think it depends on what our ArrayBuffer implementation looks like, which lives in the JS-engine. If we can add a read-only ArrayBuffer then adding something like XHR.responseType="readonly-arraybuffer" could handle the API side of things. We'd probably want to put a warning in the developer console if someone does a load of a packaged-app resource using .responseType="readonly-arraybuffer" but the resource is compressed in the zipfile.
(In reply to Jonas Sicking (:sicking) needinfo? or feedback? encouraged. from comment #22) > However how do we deal with the fact that ArrayBuffers are always > read/write? Could we do a copy-on-write? If we mmap/MapViewOfFile-ing with MAP_PRIVATE/FILE_MAP_COPY, then the OS gives us copy-on-write for free.
I would prefer to have a read only ArrayBuffer (it's something we've needed elsewhere too), as to reduce the risk that a small developer error silently grows the app memory usage by huge amounts. But that might be too hard politically/spec-wise? I'll defer to JS folks here.
hi Swu, will you be able to help looking into this bug? thanks
Flags: needinfo?(swu)
> hi Swu, will you be able to help looking into this bug? thanks Yes, assigning to me.
Assignee: thuang → swu
Flags: needinfo?(swu)
Given the urgency here I propose we don't rely on any new API. So I propose that we use something like: If a page does an XHR load that has set .responseType="arraybuffer", and that load is from an app:// URL, and the resource in question is not compressed (i.e. was added to the .zip using "store" rather than compress) that we simply mmap the ArrayBuffer. But we'll have to use copy-on-write so that the buffer can still be modified. Otherwise we risk breaking applications that aren't aware that we're using mmap. Additionally, we could use mmap in the following condition: If a page uses FileReader.readAsArrayBuffer(blob), and the Blob is coming from IndexedDB (you can check this using Blob.getFileInfo I think) then we can also use mmap. Here too we need to use copy-on-write to not break existing code. The latter allows an app to compress the dictionary in the app-package to allow for smaller downloads, but then still allow memmapping by decompressing into IndexedDB before reading from it.
Attached patch (WIP) Patch: mmap local file to ArrayBuffer (obsolete) (deleted) — Splinter Review
Initial patch that maps a local uncompressed file to an ArrayBuffer.
Btw, please check with Jan Varga how to see if a Blob is coming from IDB or not.
You also can't munmap when the XHR object goes away. That can be before the ArrayBuffer goes away. You need to munmap when the ArrayBuffer is GCed.
(In reply to Shian-Yow Wu [:shianyow] from comment #28) > Created attachment 8350614 [details] [diff] [review] > (WIP) Patch: mmap local file to ArrayBuffer > > Initial patch that maps to a local uncompressed file to ArrayBuffer. The buffer object will need a header before the mapped data, and the data will need to be page aligned. Perhaps the area could be pre-allocated, plus a page for the header, then the data mapped with an offset of one page from the start giving room to add the header. Perhaps an alternative strategy would be to require an existing buffer into which the file is mapped. This would also be useful for asm.js code which could then avoid copying the data into its heap buffer. There is a patch in bug 855669 that page aligns the array buffer elements that might help, shall rebase this soon.
hi Shian-yow, mind providing a further update on this? thanks
Flags: needinfo?(swu)
(In reply to Joe Cheng [:jcheng] from comment #32)
> hi Shian-yow, mind providing a further update on this? thanks

We are targeting support for the following scenarios:
1. If the file is compressed, uncompress it locally and then map the uncompressed file.
2. If the file was added to the package using "store" rather than compress, map it at the offset of the file in the package.

Scenario 1 has been implemented and is under testing. There is a crash when the ArrayBuffer is GCed, which is also being investigated.
Flags: needinfo?(swu)
The WIP patch supports uncompressing the target file locally for memory mapping. Still need to munmap when the ArrayBuffer is GCed and solve the crash issue.
Attachment #8350614 - Attachment is obsolete: true
shianyow,

Please confirm that we indeed need to store the data in the zip uncompressed. That requires a Gaia build script change. Your WIP patch suggests otherwise (because it's WIP?)

If so, please loop in :yurenju and file a bug. Thanks!
Flags: needinfo?(swu)
(In reply to Tim Guan-tin Chien [:timdream] (MoCo-TPE) (please ni?) from comment #36) > shianyow, > > Please confirm that we indeed need to store data in zip uncompressed. That > require a Gaia build script change. Your WIP patch suggests otherwise > (because it's WIP?) > > If so, please look :yurenju and file bug. Thanks! Yes, because it's WIP. Later version will support both cases, so Gaia build script change is preferred but not mandatory.
Flags: needinfo?(swu)
(In reply to Shian-Yow Wu [:shianyow] from comment #37) > Yes, because it's WIP. > Later version will support both cases, so Gaia build script change is > preferred but not mandatory. I've filed bug 957497. Thanks!
The patch supports:
1. For a compressed file in the package, uncompress it into /data/local/tmp and map the ArrayBuffer to the uncompressed file.
2. For an uncompressed (stored) file in the package, map the ArrayBuffer at the offset of the target file in the package.
3. Correctly munmap when the ArrayBuffer is GCed, which fixes the crash issue in the previous patch.

Note:
- Currently the mapping only supports the B2G platform.
- The latest B2G gaia code enables keyboard OOP by default; we need the keyboard running in the chrome process to make use of this feature. To disable keyboard OOP:
  1) Set "keyboard.3rd-party-app.enabled" to false in "gaia/build/config/common-settings.json".
  2) Then "make reset-gaia" in the gaia directory.
Attachment #8355500 - Attachment is obsolete: true
Attachment #8357695 - Flags: feedback?(thuang)
/data/local/tmp is world rw iirc. That needs sec review. Also limiting that to the parent process is really unfortunate.
Flags: needinfo?(ptheriault)
(In reply to Fabrice Desré [:fabrice] from comment #40)
> /data/local/tmp is world rw iirc. That needs sec review. Also limiting that
> to the parent process is really unfortunate.

Allow me to clarify the limitations of this version when the keyboard runs OOP. For case 1 there are some limitations: when the keyboard process is restarted, it cannot access the previously decompressed file. This can be fixed. For case 2, accessing a file stored uncompressed in the package is OK.
Looks good to me, with only two questions:
1. It occurs to me that ArrayBuffers should perhaps be aligned to some extent (4 or 8 or ...) for the JIT? How is this related to AsmJSArrayBuffer?
2. It looks like when a local ArrayBuffer can't be mapped, the XHR fails. Can we then just fall back to the old way?

and some nitpicks:
1. When calculating the end of the mapped address with ((aOffset + aSize) & ~(page_size - 1)) + page_size, there is a chance that a page is wasted: when the file end is already aligned. (aOffset + aSize + page_size - 1) & ~(page_size - 1) handles the corner case (see the sketch below).
2. Is it possible to delegate the system-dependent operations (mmap/munmap/open/etc.) to channels?
3. Should the checks in setCapacity() and Append() in ArrayBufferBuilder be assertions?
4. I spent hours more than expected checking the alignments and sizes. It would be great to have a comprehensive comment :-)
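To make nitpick 1 concrete, a standalone comparison of the two roundings, using the variable names from the patch and assuming page_size is a power of two:

uint32_t end = aOffset + aSize;
// Current code: always adds a full page, so an already page-aligned end wastes one page.
uint32_t padded  = (end & ~(page_size - 1)) + page_size;
// Suggested form: rounds up only when needed; an aligned end stays unchanged.
uint32_t rounded = (end + page_size - 1) & ~(page_size - 1);
// Example with page_size == 4096 and end == 8192: padded == 12288, rounded == 8192.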
Flags: needinfo?(ptheriault)
Attachment #8357695 - Flags: feedback?(thuang) → feedback+
Flags: needinfo?(ptheriault)
Sorry, accidentally removed the needinfo. Reflagging it for comment 40.
(In reply to Shian-Yow Wu [:shianyow] from comment #39) > Created attachment 8357695 [details] [diff] [review] > Patch: mmap ArrayBuffer to compressed or stored file in package ... > 3. Correctly munmap when ArrayBuffer been GCed, which fixed the crash issue > in previous patch. For bug 855669, buffer.discard(), the typed arrays are expected to be aligned so that the start of the array elements occurs on a page boundary. It looks simple to modify mapToFile to align the elements, and might you consider doing this now? I expect buffer.discard() would only be useful if the array were written to and became dirty, and this might not be a common usage, so perhaps supporting buffer.discard() well for these mapped files could be ignored.
(In reply to Ting-Yuan Huang from comment #42)
> 1. It occurs to me that ArrayBuffers should perhaps be aligned to some
> extent (4 or 8 or ...) for the JIT? How is this related to AsmJSArrayBuffer?

Good point. For a stored file in the zip package, it is possible that the starting position is not aligned to 4 bytes.

> 2. It looks like when a local ArrayBuffer can't be mapped, the XHR fails.
> Can we then just fall back to the old way?

When mapping fails, mIsMappedArrayBuffer will be false, and it will fall back to the old way.

> and some nitpicks:
> 1. When calculating the end of the mapped address with ((aOffset + aSize) &
> ~(page_size - 1)) + page_size, there is a chance that a page is wasted: when
> the file end is already aligned. (aOffset + aSize + page_size - 1) &
> ~(page_size - 1) handles the corner case.
> 2. Is it possible to delegate the system-dependent operations
> (mmap/munmap/open/etc.) to channels?
> 3. Should the checks in setCapacity() and Append() in ArrayBufferBuilder be
> assertions?
> 4. I spent hours more than expected checking the alignments and sizes. It
> would be great to have a comprehensive comment :-)

Thank you for the comments.
(In reply to Douglas Crosher [:dougc] from comment #44) > (In reply to Shian-Yow Wu [:shianyow] from comment #39) > > Created attachment 8357695 [details] [diff] [review] > > Patch: mmap ArrayBuffer to compressed or stored file in package > ... > > 3. Correctly munmap when ArrayBuffer been GCed, which fixed the crash issue > > in previous patch. > > For bug 855669, buffer.discard(), the typed arrays are expected to be > aligned so that the start of the array elements occurs on a page boundary. > It looks simple to modify mapToFile to align the elements, and might you > consider doing this now? Yes, thank you for the comment. Does this mean we have to pack the zip in a way that target file stored in zip package is page aligned?
(In reply to Shian-Yow Wu [:shianyow] from comment #46) > (In reply to Douglas Crosher [:dougc] from comment #44) > > (In reply to Shian-Yow Wu [:shianyow] from comment #39) > > > Created attachment 8357695 [details] [diff] [review] > > > Patch: mmap ArrayBuffer to compressed or stored file in package > > ... > > > 3. Correctly munmap when ArrayBuffer been GCed, which fixed the crash issue > > > in previous patch. > > > > For bug 855669, buffer.discard(), the typed arrays are expected to be > > aligned so that the start of the array elements occurs on a page boundary. > > It looks simple to modify mapToFile to align the elements, and might you > > consider doing this now? > > Yes, thank you for the comment. > Does this mean we have to pack the zip in a way that target file stored in > zip package is page aligned? Good point. Yes, the start of the source would need to be page aligned in the zip file. Sorry I forgot that extra complexity. Could it be done?
(In reply to Douglas Crosher [:dougc] from comment #47) > Good point. Yes, the start of the source would need to be page aligned in > the zip file. Sorry I forgot that extra complexity. Could it be done? Yes, and I think we have to modify zip tool to do this.
(In reply to Shian-Yow Wu [:shianyow] from comment #48)
> (In reply to Douglas Crosher [:dougc] from comment #47)
> > Good point. Yes, the start of the source would need to be page aligned in
> > the zip file. Sorry I forgot that extra complexity. Could it be done?
>
> Yes, and I think we have to modify zip tool to do this.

Regarding bug 855669, is page alignment of the array buffer a must, or just better to have for efficiency?
(In reply to Shian-Yow Wu [:shianyow] from comment #48)
> (In reply to Douglas Crosher [:dougc] from comment #47)
> > Good point. Yes, the start of the source would need to be page aligned in
> > the zip file. Sorry I forgot that extra complexity. Could it be done?
>
> Yes, and I think we have to modify zip tool to do this.

I've just checked https://en.wikipedia.org/wiki/Zip_%28file_format%29#File_headers and had fun with a zip tool at hand.

I think, at worst, we could do it with two passes; if we found the first pass left the data out of alignment, in the second pass we could add a 0-byte pad file to the zip, with an n-byte file name; that would make the entire block of data 30+n bytes long.

(Unfortunately that would also generate a 46+n byte directory record at the end of the file.)
(In reply to Tim Guan-tin Chien [:timdream] (MoCo-TPE) (please ni?) from comment #50)
> I think, at worst, we could do it with two passes; if we found the first
> pass left the data out of alignment, in the second pass we could add a
> 0-byte pad file to the zip, with an n-byte file name; that would make the
> entire block of data 30+n bytes long.
>
> (Unfortunately that would also generate a 46+n byte directory record at the
> end of the file.)

The typical page size is 4096 bytes, so we need to align the stored file to 4096 bytes. The "extra field" in the local file header of a zip file is meant for OS-specific attributes; I think we can add the padding in the extra field.
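For illustration, the padding arithmetic this implies, assuming the standard ZIP local file header layout (30 fixed bytes, then the file name, then the extra field, then the data); this is only a sketch, not an existing tool:

#include <cstdint>

// How many bytes of "extra field" padding are needed so the member's data
// starts on a page boundary, given where its local file header begins.
uint32_t ExtraFieldPadding(uint32_t headerOffset, uint32_t nameLen,
                           uint32_t pageSize = 4096) {
  const uint32_t kLocalHeaderSize = 30;  // fixed part of the local file header
  uint32_t dataOffset = headerOffset + kLocalHeaderSize + nameLen;
  return (pageSize - (dataOffset % pageSize)) % pageSize;
}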
(In reply to Shian-Yow Wu [:shianyow] from comment #49)
> (In reply to Shian-Yow Wu [:shianyow] from comment #48)
> > (In reply to Douglas Crosher [:dougc] from comment #47)
> > > Good point. Yes, the start of the source would need to be page aligned in
> > > the zip file. Sorry I forgot that extra complexity. Could it be done?
> >
> > Yes, and I think we have to modify zip tool to do this.
>
> Regarding bug 855669, is page alignment of the array buffer a must, or just
> better to have for efficiency?

No, it is not a requirement. There is no need to hold up this bug. Page alignment can be explored later; for example, mapToFile could probably be extended to page align the start of the array elements only when supplied with a page-aligned file offset, and then separately a zip extension could be explored to take advantage of this. However, we should still check whether the smaller alignment questioned in comment 42 is necessary, and if so it might be useful to just page align.
Sorry, one question to confirm: when we say page alignment of the array buffer elements, do we mean aligning the header of the array buffer, or the data of the array buffer?
(In reply to Shian-Yow Wu [:shianyow] from comment #53)
> Sorry, one question to confirm: when we say page alignment of the array
> buffer elements, do we mean aligning the header of the array buffer, or the
> data of the array buffer?

The alignment in comment 42 is probably referring to the alignment of the start of the header. Need to check, but the header is likely a multiple of 4 or 8 bytes, and if the data follows it immediately then the data will also be aligned to 4 or 8 bytes.

The alignment needed to make buffer.discard() work well is alignment of the start of the elements (after the header). If the header immediately precedes the elements then the header itself will not be page aligned, but that is not necessary; the start of the header will still be appropriately aligned. It may well be possible to support padding between the header and the elements if really necessary.

The discussion in bug 957690 might be of interest for thinking ahead.
No longer blocks: 957497
Blocks: 957497
Could I ask if the Range header needs to be handled? Is requesting a range of bytes supported for app:// URLs?
Honestly, why are you reinventing Blob?
(In reply to Douglas Crosher [:dougc] from comment #55) > Could I ask if the Range header needs to be handled? > > Is requesting a range of bytes supported for app:// URLs? No, there are no headers with app:// urls.
(In reply to Masatoshi Kimura [:emk] from comment #56) > Honestly, why are you reinventing Blob? We are not. This is an optimization to read data as typed arrays from app packages only.
Comment on attachment 8357695 [details] [diff] [review] Patch: mmap ArrayBuffer to compressed or stored file in package Review of attachment 8357695 [details] [diff] [review]: ----------------------------------------------------------------- ::: content/base/src/nsXMLHttpRequest.cpp @@ +3999,5 @@ > + printf_stderr("aFile = %s\n", aFile); > + printf_stderr("aOffset = 0x%x, aSize=%d, aHeaderSize=%d\n", > + aOffset, aSize, aHeaderSize); > + > + int fd = open(aFile, O_RDONLY); This code can be run in a content process, I think? If so, it should use the PRemoteOpenFile IPC protocol instead — that will provide a file descriptor that can be mmap()ed, and access to the app:// space is one of the use cases it already allows.
(In reply to Jed Davis [:jld] from comment #59) > ::: content/base/src/nsXMLHttpRequest.cpp > @@ +3999,5 @@ > > + printf_stderr("aFile = %s\n", aFile); > > + printf_stderr("aOffset = 0x%x, aSize=%d, aHeaderSize=%d\n", > > + aOffset, aSize, aHeaderSize); > > + > > + int fd = open(aFile, O_RDONLY); > > This code can be run in a content process, I think? If so, it should use > the PRemoteOpenFile IPC protocol instead — that will provide a file > descriptor that can be mmap()ed, and access to the app:// space is one of > the use cases it already allows. Yes, the code is targeted to be run in either content or chrome process. Thank you for this information.
PM retriaged this and is questioning whether we need to actually block 1.3 on this. It seems late to block the release on refactoring work.
blocking-b2g: 1.3+ → 1.3?
That should not block 1.3 indeed. We'll cherry pick for tarako.
blocking-b2g: 1.3? → ---
Flags: needinfo?(ptheriault) → sec-review?(ptheriault)
So I guess the main security concern here is getting this to work out-of-process. From comment 59, it sounds like the case where the data is stored uncompressed and we can map the file directly from the app package is covered, but I guess the extract case takes more work (unless there already is a remote API for creating temp files?).

Creating a temp file also raises the question of making sure other processes can't modify the temp file while it is mmap'ed (i.e. setting file permissions, assuming there is a remote API for the parent to create and open a temp file for the child).

Finally, in the extract case, we also need to avoid "symlink race" style bugs. I.e. if we create our temp file running as root, we may be coerced into overwriting an existing file via a symlink, I think? Shian-yow raised this with me, but I am not sure what the standard approach to safely creating a temp file is in Gecko. Maybe someone else can comment on this?
We also need to make sure that we never mmap packages downloaded from the internet, including our own store. If we want to mmap them, we need to mmap a rewritten version. This way, we can guarantee the alignment is correct. And/or, we need to make sure we check the alignment properly before mmapping the package.
(In reply to Paul Theriault [:pauljt] from comment #63) > but I guess the extract case takes more work (unless there already is a > remote API for creating temp files?) No, and designing a general-case IPC interface for that would require some serious thought given to the security implications. In this particular case, the parent could create the temporary file and pass the child a read-only file descriptor to it. More generally, the child could send the zip path and member name, and receive a fd + offset + length, to cover both cases.
(In reply to Brian Smith (:briansmith, :bsmith; NEEDINFO? for response) from comment #64) > We also need to make sure that we never mmap packages downloaded from the > internet, including our own store. If we want to mmap them, we need to mmap > a rewritten version. This way, we can guarantee the alignment is correct. > And/or, we need to make sure we check the alignment properly before mmapping > the package. Does the memory have to be page-aligned? Could we round down the start offset and round up the end offset, mmap that region, and point the TypedArray's elements pointer at the virtual address where the data starts? That will consume at most 4095*2 extra bytes (and note that if the array is discard()ed up to one or both ends, the zeroed area can be extended to the entire page(s)). If the memory *does* have to be page-aligned, could we just not optimize (possibly with some kind of warning) in that case?
(In reply to Jed Davis [:jld] from comment #66) > (In reply to Brian Smith (:briansmith, :bsmith; NEEDINFO? for response) from > comment #64) > > We also need to make sure that we never mmap packages downloaded from the > > internet, including our own store. If we want to mmap them, we need to mmap > > a rewritten version. This way, we can guarantee the alignment is correct. > > And/or, we need to make sure we check the alignment properly before mmapping > > the package. > > Does the memory have to be page-aligned? No, but it may need to be aligned on a word, or double word, boundary. Page alignment will help for future features such as buffer.discard(). If the buffer is used read-only then buffer.discard() might not even be useful. > Could we round down the start > offset and round up the end offset, mmap that region, and point the > TypedArray's elements pointer at the virtual address where the data starts? > That will consume at most 4095*2 extra bytes (and note that if the array is > discard()ed up to one or both ends, the zeroed area can be extended to the > entire page(s)). The current code appears to do this. > If the memory *does* have to be page-aligned, could we just not optimize > (possibly with some kind of warning) in that case? This sounds like a good approach.
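A minimal sketch of the rounding described above, assuming POSIX mmap; the function and variable names are illustrative only:

#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

// Map [offset, offset + length) of fd using page-aligned bounds, and return a
// pointer to the (possibly unaligned) start of the data inside the mapping.
uint8_t *MapRange(int fd, uint32_t offset, uint32_t length) {
  const uint32_t page = (uint32_t)sysconf(_SC_PAGESIZE);           // typically 4096
  uint32_t paStart = offset & ~(page - 1);                         // round the start down
  uint32_t paEnd = (offset + length + page - 1) & ~(page - 1);     // round the end up
  void *base = mmap(nullptr, paEnd - paStart, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE, fd, paStart);
  if (base == MAP_FAILED)
    return nullptr;
  // The TypedArray's elements pointer would be this interior address; munmap()
  // later has to use the page-aligned base and the size (paEnd - paStart).
  return (uint8_t *)base + (offset - paStart);
}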
(In reply to Paul Theriault [:pauljt] from comment #63) > So I guess the main security concern here is getting this to work > out-of-process. From comment 59, it sounds like the case when the data is > stored uncompressed and we can map the file directly from the app package, > but I guess the extract case takes more work (unless there already is a > remote API for creating temp files?) > > Creating a temp file also raises the question or making sure other processes > can't modify the temp file while it is mmap'ed (i.e. setting file > permissions, assuming there is a remote API for parent to create and open > temp file for the child) > > Finally in the extract case, we also need to avoid "symlink race" style > bugs. I.e. if we create our temp file run as root, we may be coerced into > overwriting an existing file via a symlink I think? Shian-yow raised this > with me, but I am not sure of what the standard approach to safely create a > temp file is in Gecko. Maybe someone else can comment on this? The nsIFile.createUnique() is used for creating unique temp files, but it might not be what we need, we don't want to create files each time. Probably just writing to a non-world writable directory?
(In reply to Jed Davis [:jld] from comment #59)
> This code can be run in a content process, I think? If so, it should use
> the PRemoteOpenFile IPC protocol instead — that will provide a file
> descriptor that can be mmap()ed, and access to the app:// space is one of
> the use cases it already allows.

Unfortunately PRemoteOpenFile is not a trivial one-line replacement for open(). It is currently supported by nsIJARChannel/nsIJARProtocolHandler. For mmap, we want to open a plain file rather than a file inside an app package. So I think we would have to enhance nsIFileChannel/nsIFileProtocolHandler to support opening a local file from the child process. Can someone comment on whether this is the right track?
(In reply to Shian-Yow Wu [:shianyow] from comment #69)
> Unfortunately PRemoteOpenFile is not a trivial one-line replacement for
> open(). It is currently supported by nsIJARChannel/nsIJARProtocolHandler.
> For mmap, we want to open a plain file rather than a file inside an app
> package. So I think we would have to enhance
> nsIFileChannel/nsIFileProtocolHandler to support opening a local file from
> the child process. Can someone comment on whether this is the right track?

Another problem to overcome is that PRemoteOpenFile doesn't support synchronous open, which makes the implementation more complicated.

Since nsJARChannel already has PRemoteOpenFile support, I'm thinking about the following way to mmap() from the child:
1. Enhance nsJARChannel to support getting the fd of the zip package.
2. We need nsZipArchive to find the offset of the target file in the zip package; this can be done by enhancing nsZipArchive to open an archive given an existing fd instead of a file name.
3. Then we can do the mmap() with fd + offset + length.

If it works, we don't have to worry about PRemoteOpenFile support in nsIFileChannel.
This patch works when the keyboard runs OOP, although it still uses open() to read files. The PRemoteOpenFile support will be addressed in a separate patch, because it needs more effort.

Changes:
1. Addressed the feedback in comment 42, except that mmap/munmap/open are still done in ArrayBufferBuilder instead of the channel.
2. Alignment check. It will not map if the uncompressed file in the zip is not aligned to 4 bytes, and just prints a warning if it is not aligned to the page size.
3. Retrieve the real zip package path from nsIJARChannel.
4. The decompression part is split into a different patch, as it may need more discussion.
5. Misc fixes and cleanup.
Attachment #8357695 - Attachment is obsolete: true
Attached patch Patch: Decompress to local file. (obsolete) (deleted) — Splinter Review
1. Only decompress to local file when running in chrome process. 2. Writing decompressed file to /data/local (which is not world writable).
Attachment #8364959 - Flags: feedback?(thuang)
Attachment #8364959 - Flags: feedback?(dtc-moz)
Attachment #8364959 - Attachment description: Patch: mmap ArrayBuffer to uncompressed/aligned file in zip package → Part 1: mmap ArrayBuffer to uncompressed/aligned file in zip package
Attachment #8364964 - Attachment description: Patch: Decompress to local file. → Part 2: Decompress to local file.
We need a test case to check that the XHR response from an app:// URL is correct, and to make sure it is mapped. Does anyone have a good idea how to write such a test? There was a mochitest in bug 804395, written by baku, which tests the response code for the app:// protocol. Maybe we can build some test harness in Gecko and simulate the mapping with a jar:// URL, like what was done in bug 804395. Do you have any comments, baku?
Flags: needinfo?(amarchesini)
Shian-Yow, you can also test by installing a packaged app, loading it and accessing some content through XHR. See the tests under dom/apps for some inspiration.
> 804395. Do you have any comments, baku?

If I remember correctly, app:// and jar:// are managed in the same way. Just do an XHR opening a zip (jar) file using jar://example.org/path/of/your/file.jar/content.something
Flags: needinfo?(amarchesini)
What is the strategy with regard to where we put the decompressed file? And when do we delete it? Does the decompressed file live somewhere that is world writable, or does only the parent process have access to write to said directory?

A simpler solution to implement than automatically decompressing files is to only support files that are stored (rather than compressed) in the .zip, but then also support mmapping when loading from Blobs that are stored in IDB. This way the app can itself load the file from the app:// package, store it in IDB, and then load it from there. That way it can be sure to get a mmap.
Why do we have to decompress the file? zip can contain uncompressed files
The current patch handles the scenario where the zip contains an uncompressed file just fine; no new files are created in that case. However, the patch also handles the scenario where you load from a compressed file. In that case the file is first decompressed and then mmapped.
I talked with Shian-Yow about this; it is a temporary solution before the sandbox. For the long term, the parent process should create files for content processes and pass the fd to the content process. IDB is another solution, but a per-app private directory is more efficient.
Let's triage this with bug 958440 tomorrow.
blocking-b2g: --- → 1.3T?
triage: 1.3T+ so we can possibly have auto-correction for Tarako
blocking-b2g: 1.3T? → 1.3T+
Whiteboard: [tarako][NO_UPLIFT]
Comment on attachment 8364959 [details] [diff] [review] Part 1: mmap ArrayBuffer to uncompressed/aligned file in zip package Review of attachment 8364959 [details] [diff] [review]: ----------------------------------------------------------------- ::: content/base/src/nsXMLHttpRequest.cpp @@ +3981,5 @@ > + > +bool > +ArrayBufferBuilder::mapToFile(uint32_t aFd, uint32_t aSize, > + uint32_t aOffset, uint16_t aHeaderSize) > +{ The alignment strategy looks correct. Page aligned content will be page aligned in memory, with a header just before it in the previous page, and the start of the buffer elements will be page aligned. This will work well with the planned buffer.discard() feature, thank you. This might even be usable for pre-initialized asm.js heap buffers.
Attachment #8364959 - Flags: feedback?(dtc-moz) → feedback+
This is a WIP patch that tries to realize comment 70, which takes the fd from the parent process and uses it for mapping in the child process.

One problem to solve is that the current RemoteOpenFile doesn't support multiple open()s: if someone calls Clone() or OpenNSPRFileDesc() on the JAR file, others cannot open it again.
http://dxr.mozilla.org/mozilla-central/source/netwerk/ipc/RemoteOpenFileChild.cpp#324

For this bug, we need to seek to the right offset of the target file in the zip package and then do the mapping, which requires opening it at least twice.
Comment on attachment 8364964 [details] [diff] [review] Patch: Decompress to local file. Remove 'Part 2' from description, because writing to local file is not short term goal for this bug.
Attachment #8364964 - Attachment description: Part 2: Decompress to local file. → Patch: Decompress to local file.
Attachment #8372208 - Attachment is obsolete: true
This is a great idea, but it is very much the wrong component for this, and I think you're missing the eyes of some people who will need to be involved. Firefox OS :: General is *never* the right component for anything that's going to result in a patch, doubly so for anything that's going to result in a patch to Gecko. That said -- there are two places where we should absolutely use mmap() if we can, for both desktop and for mobile. The first is this one here XHR from these types of URLs, such as is seen here; and the second is IndexedDB blob fetches. In both cases COW can be used to get around the r/w aspect of typed arrays without blowing away the data on disk.
Component: General → DOM: Core & HTML
Product: Firefox OS → Core
This is an awesome idea, indeed! Please do add actual JS (friend, presumably) API for whatever JS engine bits you need here instead of including private JS headers...
This patch adds an about:memory statistic for memory-mapped array buffers. Here is an example with the memory-mapped dictionary "en_us.dict" of the keyboard app, which is counted under "js-main-runtime/compartments/objects/non-heap/elements/mapped".

5,660,266 B (100.0%) -- js-main-runtime
├──3,134,962 B (55.39%) -- compartments
│  ├──2,098,510 B (37.07%) -- objects
│  │  ├──1,451,390 B (25.64%) -- non-heap
│  │  │  ├──1,451,390 B (25.64%) ── elements/mapped
│  │  │  └──────────0 B (00.00%) ── code/asm.js
│  │  ├────449,072 B (07.93%) -- gc-heap
│  │  │  ├──265,248 B (04.69%) ── function
│  │  │  ├──162,992 B (02.88%) ── ordinary
│  │  │  └───20,832 B (00.37%) ── dense-array
│  │  └────198,048 B (03.50%) -- malloc-heap
│  │     ├──124,288 B (02.20%) ── slots
│  │     └───73,760 B (01.30%) ── elements/non-asm.js
(In reply to Vladimir Vukicevic [:vlad] [:vladv] from comment #86) > This is a great idea, but it is very much the wrong component for this, and > I think you're missing the eyes of some people who will need to be involved. > Firefox OS :: General is *never* the right component for anything that's > going to result in a patch, doubly so for anything that's going to result in > a patch to Gecko. <offtopic for this bug>Firefox OS :: General is actually the right component for code in b2g/</offtopic>
(In reply to Boris Zbarsky [:bz] from comment #87)
> This is an awesome idea, indeed! Please do add actual JS (friend,
> presumably) API for whatever JS engine bits you need here instead of
> including private JS headers...

Thank you for the feedback. Will change to include "jsfriendapi.h" instead, with two friend APIs in next patch.

JS_FRIEND_API(size_t)
JS_GetObjectElementsHeaderSize()
{
    return sizeof(ObjectElements);
}

JS_FRIEND_API(bool)
JS_MapArrayBufferContents(JSContext *cx, void *data_ptr, uint32_t data_size)
{
    js::ObjectElements *header = static_cast<ObjectElements *>(data_ptr);
    ArrayBufferObject::initMappedElementsHeader(header, data_size);
    return true;
}
Comment on attachment 8373129 [details] [diff] [review] Part 2: Report mapped array buffer statistics for about:memory. Review of attachment 8373129 [details] [diff] [review]: ----------------------------------------------------------------- Kyle, would you help give feedback on this?
Attachment #8373129 - Flags: feedback?(khuey)
Comment on attachment 8373129 [details] [diff] [review] Part 2: Report mapped array buffer statistics for about:memory. Review of attachment 8373129 [details] [diff] [review]: ----------------------------------------------------------------- I think this looks good, but njn should look at it.
Attachment #8373129 - Flags: feedback?(khuey) → feedback?(n.nethercote)
This simplifies seeking to the file data offset in the zip package.
This patch takes the fd from the parent process and uses it for mapping in the child process, as proposed in comment 70. To solve the problem mentioned in comment 83, dup() is used to preserve the fd for mmap().
Attachment #8372210 - Attachment is obsolete: true
Attachment #8373129 - Flags: review+
Attachment #8373129 - Flags: feedback?(n.nethercote)
Attachment #8373129 - Flags: feedback+
1. Added GetDataOffset() support to nsZipArchive.
2. Access the JS engine through the JS friend API.
3. Misc revisions.
Attachment #8364959 - Attachment is obsolete: true
Attachment #8364959 - Flags: feedback?(thuang)
Revised patch, moving some code logic to part 1.
Attachment #8373331 - Attachment is obsolete: true
Attachment #8373335 - Attachment is obsolete: true
Comment on attachment 8373973 [details] [diff] [review] Part 1: Memory mapping array buffer to uncompressed file in zip package. Ting-Yuan, the patch should have addressed your previous feedback comments, except that open/mmap/munmap are not delegated to the channel (I haven't yet figured out a generic way to do it in the JAR channel); more feedback is welcome. Jed, could you give feedback on the remote file open or any other parts? Thank you.
Attachment #8373973 - Flags: feedback?(thuang)
Attachment #8373973 - Flags: feedback?(jld)
Attachment #8373979 - Flags: feedback?(jld)
Based on testing the Latin input method with the "en_us.dict" dictionary, about:memory shows that the array buffer of the XHR in the main thread is memory mapped (1,451,390 bytes in objects/non-heap/elements/mapped). However, it appears that we also need to take care of the array buffer of the XHR in the worker thread (1,555,952 bytes in objects/malloc-heap/elements/non-asm.js), which is not yet addressed by the current patch.

13,899,134 B (100.0%) -- explicit
├───4,508,902 B (32.44%) -- js-non-window
│   ├──3,229,878 B (23.24%) -- zones
│   │  ├──2,471,298 B (17.78%) -- zone(0x4038cc00)
│   │  │  ├──1,480,154 B (10.65%) -- compartment(moz-nullprincipal:{02d6e6b5-3e64-444f-8277-611367d623f4})
│   │  │  │  ├──1,451,390 B (10.44%) -- objects/non-heap
│   │  │  │  │  ├──1,451,390 B (10.44%) ── elements/mapped
│   │  │  │  │  └──────────0 B (00.00%) ── code/asm.js
│   │  │  │  └─────28,764 B (00.21%) -- sundries
│   │  │  │     ├──16,552 B (00.12%) ── gc-heap
│   │  │  │     └──12,212 B (00.09%) ── malloc-heap
...
├───2,568,168 B (18.48%) -- workers/workers(gaiamobile.org)/worker(js/imes/latin/worker.js, 0x403a4800)
│   ├──1,702,116 B (12.25%) -- zone(0x40390000)
│   │  ├──1,634,112 B (11.76%) -- compartment(web-worker)
│   │  │  ├──1,555,952 B (11.19%) -- objects
│   │  │  │  ├──1,536,896 B (11.06%) -- malloc-heap
│   │  │  │  │  ├──1,526,784 B (10.98%) ── elements/non-asm.js
│   │  │  │  │  └─────10,112 B (00.07%) ── slots
│   │  │  │  ├─────19,056 B (00.14%) ── gc-heap/function
│   │  │  │  └──────────0 B (00.00%) ── non-heap/code/asm.js
Note: you should probably have sfink r? the changes to js/src/vm/TypedArrayObject.
sfink, could you help to review on changes to js/src/vm/TypedArrayObject?
Attachment #8373973 - Attachment is obsolete: true
Attachment #8373973 - Flags: feedback?(thuang)
Attachment #8373973 - Flags: feedback?(jld)
Attachment #8375242 - Flags: review?(sphink)
Attachment #8375242 - Flags: feedback?(thuang)
Attachment #8375242 - Flags: feedback?(jld)
Comment on attachment 8375242 [details] [diff] [review] Part 1: Memory mapping array buffer to uncompressed file in zip package. I don't know the code involved well enough to fully review it, but in terms of resource access in the content process this looks good.
Attachment #8375242 - Flags: feedback?(jld) → feedback+
Attachment #8373979 - Flags: feedback?(jld) → feedback+
Attached patch Part 1: Support mapped array buffer type. (obsolete) (deleted) — Splinter Review
1. Fixed array buffer of XHR in worker not correctly mapped array buffer type, as stated in comment 70. 2. Moved mmap() handling from XHR to JS engine. 3. Re-organized patch for better review. All JS engine related change are moved to part 1.
Attachment #8375242 - Attachment is obsolete: true
Attachment #8375242 - Flags: review?(sphink)
Attachment #8375242 - Flags: feedback?(thuang)
Attachment #8376137 - Flags: review?(sphink)
Attachment #8373979 - Attachment description: Part 3: Mapping with file descriptor from remote file open. → Part 4: Mapping with file descriptor from remote file open.
Attachment #8376140 - Flags: feedback?(thuang)
(In reply to Shian-Yow Wu [:shianyow] from comment #102) > Created attachment 8376137 [details] [diff] [review] > Part 1: Support mapped array buffer type. > > 1. Fixed array buffer of XHR in worker not correctly mapped array buffer > type, as stated in comment 70. Sorry, it is comment 98.
Attached patch Part 1: Support mapped array buffer type. (obsolete) (deleted) — Splinter Review
Minor change to previous patch on API name and parameters.
Attachment #8376137 - Attachment is obsolete: true
Attachment #8376137 - Flags: review?(sphink)
Attachment #8376169 - Flags: review?(sphink)
Attached patch Part 1: Support mapped array buffer type. (obsolete) (deleted) — Splinter Review
Refactor: change JS_UNLIKELY() to MOZ_UNLIKELY().
Attachment #8376169 - Attachment is obsolete: true
Attachment #8376169 - Flags: review?(sphink)
Attachment #8377043 - Flags: review?(sphink)
Attachment #8373129 - Attachment is obsolete: true
(In reply to Shian-Yow Wu [:shianyow] from comment #107) > Created attachment 8377045 [details] [diff] [review] > Part 2: Report mapped array buffer statistics for about:memory. (r=n.nethercote) Refactor: change JS_UNLIKELY() to MOZ_UNLIKELY().
Refactor.
Attachment #8376140 - Attachment is obsolete: true
Attachment #8376140 - Flags: feedback?(thuang)
Attachment #8377462 - Flags: feedback?(thuang)
Comment on attachment 8377043 [details] [diff] [review]
Part 1: Support mapped array buffer type.

Review of attachment 8377043 [details] [diff] [review]:
-----------------------------------------------------------------

I'd like to see this again with the platform-dependent mapping code moved and integrated into gc/Memory.cpp.

Also, this really really needs some sort of tests. Maybe those are in another one of the patches? I haven't looked.

::: content/base/src/nsXMLHttpRequest.cpp (see js/src/vm/TypedArrayObject.cpp below)

::: js/src/vm/TypedArrayObject.cpp
@@ +578,5 @@
> +  uint32_t align_min = 8;   // Minimal alignment size
> +  uint32_t pa_start;        // Page aligned starting
> +  uint32_t pa_end;          // Page aligned ending
> +  uint32_t pa_size;         // Total page aligned size
> +  uint32_t page_size = sysconf(_SC_PAGE_SIZE); // Page size

Sorry to be a nuisance, but I'd really rather not have this platform-dependent code here. Can you move this stuff into gc/Memory.cpp and add what interfaces you need to gc/Memory.h?

I notice that this code uses _SC_PAGE_SIZE and gc/Memory.cpp uses _SC_PAGESIZE. I guess they're synonymous?

@@ +593,5 @@
> +  // Check for minimal alignment requirement.
> +#if NEED_PAGE_ALIGNED
> +  align_min = std::max(align_min, page_size);
> +#endif
> +  if ((aOffset & ~(align_min - 1)) != aOffset) {

I would expect

  if (aOffset & (align_min - 1)) {

or

  if (aOffset & (align_min - 1) != 0) {

but maybe that's just me.

@@ +599,5 @@
> +    return nullptr;
> +  }
> +
> +  // Make sure not to display bytes past end of file.
> +  if(aOffset + aLength > (uint32_t) st.st_size) {

I may be getting this wrong, but doesn't this fail on overflow? eg if aOffset == aLength == 2^31, this conditional would be false?

@@ +605,5 @@
> +  }
> +
> +  // Page aligned starting of the offset.
> +  pa_start = aOffset & ~(page_size - 1);
> +  // Calcualte page aligned ending by adding one page to the page aligned

*Calculate

@@ +634,5 @@
> +    buf = (uint8_t *) mmap(buf, pa_size, PROT_READ | PROT_WRITE,
> +                           MAP_PRIVATE | MAP_FIXED, aFd, pa_start);
> +  }
> +  close(aFd);
> +  if (buf == MAP_FAILED) {

SM style nit: no { } here

@@ +661,5 @@
> +
> +  ObjectElements *header = buffer.getElementsHeader();
> +  if (header) {
> +#ifdef XP_WIN
> +    // To be implemented.

I'm a little uncomfortable with the platform-dependence of this (ie, not having it available on Windows.) But I guess I won't block it for that. I *would* like the interface to allow for the possibility of cross-platform support. So eg instead of a uint32_t file descriptor parameter, typedef it to a more generic name and use uint32_t on XP_UNIX, perhaps HANDLE on Windows? (I'm guessing from http://msdn.microsoft.com/en-us/library/aa366537%28v=vs.85%29.aspx ; I don't actually know how to do it there.)
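On the overflow question above, one way to write the bounds check so that aOffset + aLength cannot wrap, sketched with the same names as the patch (and keeping the patch's assumption that the file size fits in 32 bits):

// Reject the request if it reaches past the end of the file, without ever
// computing aOffset + aLength, which can wrap around for large 32-bit values.
uint32_t fileSize = (uint32_t) st.st_size;
if (aLength > fileSize || aOffset > fileSize - aLength) {
  return nullptr;
}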
Attachment #8377043 - Flags: review?(sphink)
Attached patch (WIP) Part 1: Support mapped array buffer type. (obsolete) (deleted) — Splinter Review
WIP patch with the following changes:
1. Platform-dependent mapping code moved to gc/Memory.cpp.
2. Other review comments addressed.
3. Removed the changes to StructuredClone.cpp from the previous patch, which turned out to cause a problem during testing.

TODO:
1. Test cases.
2. Avoid heap memory being allocated when a mapped array buffer object is cloned.
3. Other platform (i.e. Windows) support, probably in a follow-up bug.

Note:
The issue mentioned in comment 98 was actually caused by the mapped array buffer being cloned at:
http://dxr.mozilla.org/mozilla-central/source/dom/workers/XMLHttpRequest.cpp#1177

The previous patch tried to solve it by setting the mapped attribute on the cloned array buffer object. However, that approach doesn't work because the cloned object actually allocates new heap memory. So we need to find another way to fix this when a mapped array buffer object is cloned.

Should we perhaps pass the pointer of the mapped array buffer object to the new object, instead of cloning it?
Flags: needinfo?(sphink)
Attached patch (WIP) Part 1: Support mapped array buffer type. (obsolete) (deleted) — Splinter Review
WIP patch with some test cases added. I can't find a good way to make sure the munmap() is correctly performed; does anyone have suggestions?
Attachment #8377043 - Attachment is obsolete: true
Attachment #8378945 - Attachment is obsolete: true
(In reply to Shian-Yow Wu [:shianyow] from comment #112)
> I can't find a good way to make sure the munmap() is correctly performed;
> does anyone have suggestions?

I don't think it's critical to check. But would it work to modify the contents of the file and then make sure the data isn't still the same as it was?

It's still probably too hard to test. The finalization depends on exact rooting, for one (the compiler might keep a copy in a register even though you're nulling out the actual stack location). And that memory could be recycled, so it's extremely unlikely but possible for it to either contain the test pattern coincidentally or be freed and mprotected and cause a seg fault.

I guess this would use up address space, so you could do a test that creates and discards a massive number of these to try to get an OOM if the munmap isn't done. But I don't think it's worth it; your existing test is good enough.
Flags: needinfo?(sphink)
(In reply to Steve Fink [:sfink] from comment #113) > (In reply to Shian-Yow Wu [:shianyow] from comment #112) > > I can't find a good way to make sure the munmap() is correctly performed, > > does anyone have suggestions? > > I don't think it's critical to check. But would it work to modify the > contents of the file and then make sure the data isn't still the same as it > was? > > It's still probably too hard to test. The finalization depends on exact > rooting, for one (the compiler might keep a copy in a register even though > you're nulling out the actual stack location). And that memory could be > recycled, so it's extremely unlikely but possible for it to either contain > the test pattern coincidentally or be freed and mprotected and cause a seg > fault. > > I guess this would use up address space, so you could do a test that creates > a discards a massive number of these to try to get an OOM if the munmap > isn't done. But I don't think it's worth it; your existing test is good > enough. Thanks for your comments. Then I will only do the GC and make sure at least it won't crash when munmap() called.
(In reply to Shian-Yow Wu [:shianyow] from comment #111) > TODO: > 1. Test cases. > 2. Avoid heap memory been allocated when mapped array buffer object been > cloned. > 3. Other platforms (i.e. Windows) support, probably in a follow up bug. > > Note: > The issue mentioned in comment 98 was actually caused by the mapped array > buffer been cloned at: > http://dxr.mozilla.org/mozilla-central/source/dom/workers/XMLHttpRequest. > cpp#1177 > > The previous patch tries to solve it by setting the mapped attribute in the > cloned array buffer object. However, that approach doesn't work because the > cloned object actually allocated new heap memory. So we need to find other > way to fixed it when mapped array buffer object been cloned. > > Probably we should pass the pointer of mapped array buffer object to new > object, instead of cloning it? Per discussion with Thinker today, we can mmap() the same fd twice and get different mapped addresses. We can use this to clone a mapped array buffer: mmap() the same fd again and unmap each mapping separately when it is GCed. The advantage of this approach is that we don't need to worry about reference counting. Working on it this way.
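A minimal sketch of that cloning idea, assuming a POSIX platform: each clone gets its own private, copy-on-write view of the file and can be munmap()ed independently of the others (names here are illustrative, not from the patch).

    #include <sys/mman.h>

    static void *
    CloneMappedContents(int fd, off_t pageAlignedOffset, size_t length)
    {
        // MAP_PRIVATE means writes through one clone never become visible
        // through the other mappings made from the same fd.
        void *p = mmap(nullptr, length, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE, fd, pageAlignedOffset);
        return p == MAP_FAILED ? nullptr : p;
    }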
Attached patch (WIP) Part 1: Support mapped array buffer type. (obsolete) (deleted) — Splinter Review
WIP patch: 1. Support cloning a mapped array buffer. 2. Test cases. 3. Misc refinements. Will do more testing before asking for review.
Attachment #8379719 - Attachment is obsolete: true
Attached patch (WIP) Part 1: Support mapped array buffer type. (obsolete) (deleted) — Splinter Review
Changes in this patch: 1. Refactored. 2. Fixed some errors on the try server. On try server Linux x64, 2 of the 4 SpiderMonkey Tests would core dump. https://tbpl.mozilla.org/?tree=Try&rev=0fc6dd9c8682 The relevant error in the full log shows: *** glibc detected *** /builds/slave/try_l64-d_sm-ggc-0000000000000/objdir/dist/bin/jsapi-tests: free(): invalid size: 0x00007f6247862ff0 *** Hi Steve, do you have any ideas about this error?
Attachment #8383163 - Attachment is obsolete: true
Flags: needinfo?(sphink)
(In reply to Shian-Yow Wu [:shianyow] from comment #117) > On try server Linux x64, the 2 of 4 SpiderMonkey Tests would core dump. > https://tbpl.mozilla.org/?tree=Try&rev=0fc6dd9c8682 > > The relevant error in full log shows: > > *** glibc detected *** > /builds/slave/try_l64-d_sm-ggc-0000000000000/objdir/dist/bin/jsapi-tests: > free(): invalid size: 0x00007f6247862ff0 *** The error occurred right after doing the GC in the test case.
(In reply to Shian-Yow Wu [:shianyow] from comment #117) > Created attachment 8383555 [details] [diff] [review] > (WIP) Part 1: Support mapped array buffer type. > > Changes in this patch: > 1. Refactored. > 2. Fixed some errors on try server. > > On try server Linux x64, the 2 of 4 SpiderMonkey Tests would core dump. > https://tbpl.mozilla.org/?tree=Try&rev=0fc6dd9c8682 > > The relevant error in full log shows: > > *** glibc detected *** > /builds/slave/try_l64-d_sm-ggc-0000000000000/objdir/dist/bin/jsapi-tests: > free(): invalid size: 0x00007f6247862ff0 *** > > Hi Steve, do you have any ideas with this error? Usually that means you are calling free() on a pointer that did not come from malloc().
(In reply to Kyle Huey [:khuey] (khuey@mozilla.com) from comment #119) > (In reply to Shian-Yow Wu [:shianyow] from comment #117) > > Created attachment 8383555 [details] [diff] [review] > > (WIP) Part 1: Support mapped array buffer type. > > > > Changes in this patch: > > 1. Refactored. > > 2. Fixed some errors on try server. > > > > On try server Linux x64, the 2 of 4 SpiderMonkey Tests would core dump. > > https://tbpl.mozilla.org/?tree=Try&rev=0fc6dd9c8682 > > > > The relevant error in full log shows: > > > > *** glibc detected *** > > /builds/slave/try_l64-d_sm-ggc-0000000000000/objdir/dist/bin/jsapi-tests: > > free(): invalid size: 0x00007f6247862ff0 *** > > > > Hi Steve, do you have any ideas with this error? > > Usually that means you are calling free() on a pointer that did not come > from malloc(). Thank you Kyle, that's helpful. After using the same options as the try server, the issue can be reproduced locally. The Linux x64 Debug build has "--enable-gcgenerational" enabled, which enables JSGC_GENERATIONAL. When js::MinorGC() is called, js::Nursery tries to free the mapped address with the default free handler, which causes this error. http://dxr.mozilla.org/mozilla-central/source/js/src/gc/Nursery.cpp#883 There are probably two solutions: 1. When the allocated object is memory mapped, don't add it to js::Nursery, so MinorGC() won't free the mapped address, and we rely on the full GC to unmap it. 2. When MinorGC() runs, make sure that js::Nursery uses the free handler provided by the JSObject. #1 might be easier and #2 should be better. Does anyone have comments?
Flags: needinfo?(sphink)
Terrence, can you take a look at comment 120?
Flags: needinfo?(terrence)
Actually, I think #1 would be better: any object that needs to make syscalls to allocate/free should have a long enough life to get tenured anyway. Nursery allocation would be a waste of time and space here. Secondly, the nursery is not meant to be used with anything that requires special work to finalize. Ideally we wouldn't need to do any work to sweep other than resetting the allocation pointer to the start of the nursery. We do make a special exception for objects with external slots/elements because they are so pervasive, but this is hardly ideal. We should definitely not be expanding this machinery. The attached patch implements this and should solve your TBPL issue.
Attachment #8384748 - Flags: review?(sphink)
Flags: needinfo?(terrence)
Comment on attachment 8384748 [details] [diff] [review] api_abo_contents_nonnursery-v0.diff Review of attachment 8384748 [details] [diff] [review]: ----------------------------------------------------------------- Yes, this seems like the right thing.
Attachment #8384748 - Flags: review?(sphink) → review+
Attached patch Part 1: Support mapped array buffer type. (obsolete) (deleted) — Splinter Review
Hi Steve, the patch is ready for review. Could you help review it again? Changes from the previous review: 1. Addressed review comments. 2. Fixed the array buffer cloning issue. 3. Test cases. 4. Fixed some test failures on the try server. 5. Misc refinements. Note: - Windows platform not yet supported; will leave it as a follow-up bug.
Attachment #8383555 - Attachment is obsolete: true
Attachment #8385108 - Flags: review?(sphink)
(In reply to Steve Fink [:sfink] from comment #110) > I'm a little uncomfortable with the platform-dependence of this (ie, not > having it available on Windows.) But I guess I won't block it for that. I > *would* like the interface to allow for the possibility of cross-platform > support. So eg instead of a uint32_t file descriptor parameter, typedef it > to a more generic name and use uint32_t on XP_UNIX, perhaps HANDLE on > Windows? (I'm guessing from > http://msdn.microsoft.com/en-us/library/aa366537%28v=vs.85%29.aspx ; I don't > actually know how to do it there.) On Windows platform, we can use "(HANDLE)_get_osfhandle(fd)" to get HANDLE from int fd[1]. js/src/shell/js.cpp uses this approach and defines a ScopedFileDesc class as the fd type[2]. As I understand it, that also serves the purpose of handling the jsCacheOpen setting. For our case, a plain int should be enough as the generic file descriptor type in the interface. Please advise if you think a special type like ScopedFileDesc is needed. [1] http://msdn.microsoft.com/en-us/library/ks2530z6%28v=VS.100%29.aspx [2] http://dxr.mozilla.org/mozilla-central/source/js/src/shell/js.cpp#5276
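For reference, a rough sketch of how a Windows implementation could keep the same int-fd interface and only convert to a HANDLE at mapping time. This is an assumption about a possible follow-up, not code from the patch; FILE_MAP_COPY gives the same copy-on-write semantics as MAP_PRIVATE, and the offset must respect the system allocation granularity.

    #ifdef XP_WIN
    #include <io.h>       // _get_osfhandle
    #include <windows.h>

    static void *
    MapFileRegion(int fd, size_t offset, size_t length)
    {
        HANDLE file = reinterpret_cast<HANDLE>(_get_osfhandle(fd));
        if (file == INVALID_HANDLE_VALUE)
            return nullptr;
        // Size 0/0 means "map the whole file"; the view below selects the region.
        HANDLE mapping = CreateFileMapping(file, nullptr, PAGE_WRITECOPY, 0, 0, nullptr);
        if (!mapping)
            return nullptr;
        void *p = MapViewOfFile(mapping, FILE_MAP_COPY,
                                0, static_cast<DWORD>(offset), length);
        CloseHandle(mapping);  // the view keeps the mapping object alive
        return p;              // nullptr on failure
    }
    #endif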
Attached patch Part 1: Support mapped array buffer type. (obsolete) (deleted) — Splinter Review
Attachment #8385108 - Attachment is obsolete: true
Attachment #8385108 - Flags: review?(sphink)
Attachment #8385125 - Flags: review?(sphink)
Comment on attachment 8385125 [details] [diff] [review] Part 1: Support mapped array buffer type. Review of attachment 8385125 [details] [diff] [review]: ----------------------------------------------------------------- I'm not done reviewing all of this, but I want to publish my comments so far (and there are enough things that I want to see change that I'm clearing the review request for now, though I'll still try to get back to it unless you post another patch first). I'm still wondering about whether we need to make an abstraction over fds and HANDLEs (or whatever they are) now, or if that can wait until the Windows implementation. It won't be that big of a deal to update callers, so it probably doesn't matter that much. I basically like the patch. comments: I think you need to add a case in StructuredClone.cpp to Discard() (or currently, DiscardEntry, though it'll be changing soon) to release these if there is an error during deserialization. (Or if a clone buffer never manages to get read, as sometimes happens.) ::: js/src/jsapi.h @@ +3176,5 @@ > JS_ReallocateArrayBufferContents(JSContext *cx, uint32_t nbytes, void **contents, uint8_t **data); > > +/* > + * Create memory mapped array buffer contents from the file descriptor, > + * offset in the file, and length for mapping. This comment just repeats the function signature. I'd delete everything starting at "from the file descriptor". Though it seems like there ought to be something else that would be useful to users; it's not like this is simple and straightforward. @@ +3186,5 @@ > + * Create memory mapped array buffer object from the file descriptor, > + * offset in the file, and length for mapping. > + */ > +extern JS_PUBLIC_API(JSObject *) > +JS_CreateMappedArrayBuffer(JSContext *cx, int fd, size_t offset, size_t length); Why is this function needed? I'd rather just have the caller use JS_CreateMappedArrayBufferContents followed by JS_NewArrayBufferWithContents. ::: js/src/jsobjinlines.h @@ +571,5 @@ > js::ObjectElements *elements = getElementsHeader(); > if (MOZ_UNLIKELY(elements->isAsmJSArrayBuffer())) > js::ArrayBufferObject::releaseAsmJSArrayBuffer(fop, this); > + else if (MOZ_UNLIKELY(elements->isMappedArrayBuffer())) > + js::ArrayBufferObject::releaseMappedArrayBuffer(fop, this); We're now relying on the GC to free up a non-memory resource (file descriptors), which is unfortunate. For now, I'll just hope that there aren't many of these created. We really ought to have a way to explicitly release these, though. Can you make JS_NeuterArrayBuffer call releaseMappedArrayBuffer too? (And handle already-neutered ArrayBuffers too.) ::: js/src/vm/ArrayBufferObject.cpp @@ +664,5 @@ > + JS_ASSERT(buffer.isMappedArrayBuffer()); > + > + ObjectElements *header = buffer.getElementsHeader(); > + if (header) { > + DeallocateMappedObject(buffer.getMappingFD(), header, header->initializedLength); Handle the case where the ArrayBuffer has been neutered and its fd cleared already. ::: js/src/vm/StructuredClone.cpp @@ +835,5 @@ > ArrayBufferObject &buffer = obj->as<ArrayBufferObject>(); > + > + if (buffer.isMappedArrayBuffer()) { > + return out.writePair(SCTAG_MAPPED_ARRAY_BUFFER_OBJECT, buffer.byteLength()) && > + out.writePair(buffer.getMappingFD(), buffer.getMappingOffset()); This reminds me that we probably ought to be writing out all these sizes and offsets as 64 bits, because I think we want to allow these to be bigger. But don't worry about that for this patch; your stuff is fitting in with what's already there.
Attachment #8385125 - Flags: review?(sphink)
(In reply to Shian-Yow Wu [:shianyow] from comment #126) > On Windows platform, we can use "(HANDLE)_get_osfhandle(fd)" to get HANDLE > from int fd[1]. Oh, sorry. I hadn't read this comment earlier. If that's the case, I'm fine with using an int fd in the interface. Ignore my previous comments about that.
(In reply to Steve Fink [:sfink] from comment #129) > Comment on attachment 8385125 [details] [diff] [review] > Part 1: Support mapped array buffer type. > > Review of attachment 8385125 [details] [diff] [review]: > ----------------------------------------------------------------- > > I'm not done reviewing all of this, but I want to publish my comments so far > (and there are enough things that I want to see change that I'm clearing the > review request for now, though I'll still try to get back to it unless you > post another patch first). I'm still wondering about whether we need to make > an abstraction over fds and HANDLEs (or whatever they are) now, or if that > can wait until the Windows implementation. It won't be that big of a deal to > update callers, so it probably doesn't matter that much. > > I basically like the patch. Thanks for your early review comments. > > comments: > > I think you need to add a case in StructuredClone.cpp to Discard() (or > currently, DiscardEntry, though it'll be changing soon) to release these if > there is an error during deserialization. (Or if a clone buffer never > manages to get read, as sometimes happens.) OK, will add it. > > ::: js/src/jsapi.h > @@ +3176,5 @@ > > JS_ReallocateArrayBufferContents(JSContext *cx, uint32_t nbytes, void **contents, uint8_t **data); > > > > +/* > > + * Create memory mapped array buffer contents from the file descriptor, > > + * offset in the file, and length for mapping. > > This comment just repeats the function signature. I'd delete everything > starting at "from the file descriptor". Though it seems like there ought to > be something else that would be useful to users; it's not like this is > simple and straightforward. Got it, will make the comment useful to users. > > @@ +3186,5 @@ > > + * Create memory mapped array buffer object from the file descriptor, > > + * offset in the file, and length for mapping. > > + */ > > +extern JS_PUBLIC_API(JSObject *) > > +JS_CreateMappedArrayBuffer(JSContext *cx, int fd, size_t offset, size_t length); > > Why is this function needed? I'd rather just have the caller use > JS_CreateMappedArrayBufferContents followed by JS_NewArrayBufferWithContents. OK, will remove it. > > ::: js/src/jsobjinlines.h > @@ +571,5 @@ > > js::ObjectElements *elements = getElementsHeader(); > > if (MOZ_UNLIKELY(elements->isAsmJSArrayBuffer())) > > js::ArrayBufferObject::releaseAsmJSArrayBuffer(fop, this); > > + else if (MOZ_UNLIKELY(elements->isMappedArrayBuffer())) > > + js::ArrayBufferObject::releaseMappedArrayBuffer(fop, this); > > We're now relying on the GC to free up a non-memory resource (file > descriptors), which is unfortunate. For now, I'll just hope that there > aren't many of these created. We really ought to have a way to explicitly > release these, though. Can you make JS_NeuterArrayBuffer call > releaseMappedArrayBuffer too? (And handle already-neutered ArrayBuffers too.) Yes, will do. > > ::: js/src/vm/ArrayBufferObject.cpp > @@ +664,5 @@ > > + JS_ASSERT(buffer.isMappedArrayBuffer()); > > + > > + ObjectElements *header = buffer.getElementsHeader(); > > + if (header) { > > + DeallocateMappedObject(buffer.getMappingFD(), header, header->initializedLength); > > Handle the case where the ArrayBuffer has been neutered and its fd cleared > already. OK, will do. 
> > ::: js/src/vm/StructuredClone.cpp > @@ +835,5 @@ > > ArrayBufferObject &buffer = obj->as<ArrayBufferObject>(); > > + > > + if (buffer.isMappedArrayBuffer()) { > > + return out.writePair(SCTAG_MAPPED_ARRAY_BUFFER_OBJECT, buffer.byteLength()) && > > + out.writePair(buffer.getMappingFD(), buffer.getMappingOffset()); > > This reminds me that we probably ought to be writing out all these sizes and > offsets as 64 bits, because I think we want to allow these to be bigger. But > don't worry about that for this patch; your stuff is fitting in with what's > already there. OK. We may create a follow up bug to support mapping to files larger than 4GB.
WIP patch: 1. Addressed some review comments. 2. Changed JS_CreateMappedArrayBufferContents() not to close the fd after mapping, so that the fd of a mapped array buffer won't be closed by the cloned one. TODO: 1. Add the case to release a mapped array buffer in Discard() of StructuredClone.cpp. 2. In the test case, sometimes setting obj = nullptr and running GC() doesn't trigger JSObject::finish() immediately. This needs to be checked.
(In reply to Steve Fink [:sfink] from comment #129) > I think you need to add a case in StructuredClone.cpp to Discard() (or > currently, DiscardEntry, though it'll be changing soon) to release these if > there is an error during deserialization. (Or if a clone buffer never > manages to get read, as sometimes happens.) The memory mapping of the array buffer will not be performed until someone calls JSAutoStructuredCloneBuffer::read(). So it should be safe before the clone buffer gets read. Once it gets read, it's immediately passed to the new js value. I assume the mapped array buffer will be released when the js value is set to nullptr and GCed. I made a test case TestDiscardClonedObject() in WIP v2, but I cannot find a way to trigger an error case. May I have your advice on it?
Flags: needinfo?(sphink)
(In reply to Shian-Yow Wu [:shianyow] from comment #133) > (In reply to Steve Fink [:sfink] from comment #129) > > I think you need to add a case in StructuredClone.cpp to Discard() (or > > currently, DiscardEntry, though it'll be changing soon) to release these if > > there is an error during deserialization. (Or if a clone buffer never > > manages to get read, as sometimes happens.) > > The memory mapping of array buffer will not be performed until someone calls > JSAutoStructuredCloneBuffer::read(). So it should be safe before the clone > buffer get read. Oh! You are right. The usual problem is from errors during ::write(), because that can invoke arbitrary JS code, but you're safe there. > Once it gets read, it's immediately passed to the new js > value. Suppose the mapped array buffer will be released when the js value > set to nullptr and GCed. I made a test case TestDiscardClonedObject() in > WIP v2, but I cannot find a way to trigger an error case. May I have your > advice on it? Hm. The ::read() path is definitely way safer, since the newly created objects are normally all well-behaved. I see two ways to trigger an error. In the shell, I would use oomAfterAllocations() to trigger an out of memory error in the middle of the read. But your stuff won't be available in the shell. I'm not sure what runs with both those testing functions defined and your custom stuff defined. dxr shows lines like var jsFuns = SpecialPowers.Cu.getJSTestingFunctions(); which I would think would have jsFuns.oomAfterAllocations(). Would that help? Alternatively, you could use a custom read callback that returns an error. I'm not sure how annoying that would be to do; it would be additional C++ test code. And now, to throw a wrench in the works: bhackett kind of came out of the blue and is working on bug 979480, which would make this stuff much, much easier. I'm not sure if you'd need to modify ArrayBuffer code at all. I also don't know when it might land. (Sorry! I didn't know it was coming. Nor do I know who should land first.)
Flags: needinfo?(sphink)
Attached patch Part 1(v3): Support mapped array buffer type. (obsolete) (deleted) — Splinter Review
This patch changes: 1. Revised API JS_CreateMappedArrayBufferContents(). 2. New API JS_ReleaseMappedArrayBufferContents(). 3. Release the resource used by the mapping when creating the object fails in JSStructuredCloneReader. 4. Release the resource used by the mapping in JS_NeuterArrayBuffer(). 5. Revised test cases.
Attachment #8385125 - Attachment is obsolete: true
Attachment #8386077 - Attachment is obsolete: true
Attachment #8386653 - Flags: review?(sphink)
(In reply to Steve Fink [:sfink] from comment #134) > Hm. The ::read() path is definitely way safer, since the newly created > objects are normally all well-behaved. I found another path to release the resource when object creation fails. Please see if it's fine. > > > And now, to throw a wrench in the works: bhackett kind of came out of the > blue and is working on bug 979480, which would make this stuff much, much > easier. I'm not sure if you'd need to modify ArrayBuffer code at all. I also > don't know when it might land. (Sorry! I didn't know it was coming. Nor do I > know who should land first.) Considering the 1.3T schedule, and that 1.3T has its own branch, a possible way would be to land the current patch first, and refactor on top of bug 979480 later.
This patch changes: 1. Revised based on the new API from part 1. 2. Other refinements. TODO: 1. Test cases for the XHR part.
Attachment #8364964 - Attachment is obsolete: true
Attachment #8377462 - Attachment is obsolete: true
Attachment #8377462 - Flags: feedback?(thuang)
Comment on attachment 8386653 [details] [diff] [review] Part 1(v3): Support mapped array buffer type. Review of attachment 8386653 [details] [diff] [review]: ----------------------------------------------------------------- ::: js/src/gc/Memory.h @@ +44,5 @@ > +// Allocate mapped memory for object from file descriptor, offset and length > +// of the file. > +// The new_fd is duplicated from original fd, for the purpose of cloned object. > +// The offset must be aligned according to alignment requirement. > +// Addition page might be allocated depending on offset and header size given. s/Addition/An additional/ ::: js/src/jsapi-tests/testMappedArrayBuffer.cpp @@ +42,5 @@ > + > + // Ensure that fd is closed after object been GCed. > + // Check the fd returned from object created in a function, > + // then do the GC, in order to guarantee the object is freed when > + // exact rooting is not on. I would be even more conservative here, in case the compiler decides to inline everything or something. Make this portion of the test #ifdef JSGC_USE_EXACT_ROOTING @@ +65,5 @@ > + void *ptr; > + int new_fd; > + if(!JS_CreateMappedArrayBufferContents(fd, &new_fd, offset, length, &ptr)) > + return nullptr; > + JS::RootedObject obj(cx, JS_NewArrayBufferWithContents(cx, ptr)); I would prefer this one to just be JSObject*, because there's so little between it and the return. The Rooted makes me look for something tricky going on. ::: js/src/jsapi.h @@ +3178,5 @@ > +/* > + * Create memory mapped array buffer contents. > + * For cloning, the fd will not be closed after mapping, and the caller must > + * take care of closing fd after calling this function. > + * A new duplicated fd used by mapping is returned in new_fd. /by mapping/by the mapping/ @@ +3189,5 @@ > + * Release the allocated resource of mapped array buffer contents before the > + * object is created. > + * If a new object has been created by JS_NewArrayBufferWithContents() with > + * this content, then JS_NeuterArrayBuffer() should be used instead to release > + * the resource used by the object. Nice comment, thanks. ::: js/src/vm/ArrayBufferObject.cpp @@ +668,5 @@ > + > + ObjectElements *header = buffer.getElementsHeader(); > + if (header) { > + DeallocateMappedObject(buffer.getMappingFD(), header, header->initializedLength); > + } For spidermonkey style, eliminate the { } around a single-line statement. ::: js/src/vm/ArrayBufferObject.h @@ +18,5 @@ > > class ArrayBufferViewObject; > > +// Header for mapped array buffer > +struct MappingInfoHeader Ugh. Another header, and it's bigger than the available inline space in an object. (Which doesn't matter, because these will never be inlined.) Fortunately, this will go away soon with bhackett's patch. @@ +179,5 @@ > + mh->offset = offset; > + } > + > + static MappingInfoHeader *getMappingInfoHeader(js::ObjectElements *header) { > + return reinterpret_cast<MappingInfoHeader *>(uintptr_t(header) - can you MOZ_ASSERT here that isMappedArrayBuffer()? @@ +184,5 @@ > + sizeof(MappingInfoHeader)); > + } > + > + uint32_t getMappingFD() { > + MappingInfoHeader *mh = getMappingInfoHeader(getElementsHeader()); and here @@ +189,5 @@ > + return mh->fd; > + } > + > + uint32_t getMappingOffset() const { > + MappingInfoHeader *mh = getMappingInfoHeader(getElementsHeader()); and here ::: js/src/vm/StructuredClone.cpp @@ +1242,5 @@ > +{ > + void *ptr; > + int new_fd; > + if(!JS_CreateMappedArrayBufferContents(fd, &new_fd, offset, length, &ptr)) > + return false; This failure path needs to set an exception on the cx. 
I think something like JS_ReportError(context(), "Failed to create mapped array buffer contents"); should be fine.
Attachment #8386653 - Flags: review?(sphink) → review+
(In reply to Steve Fink [:sfink] from comment #138) > Comment on attachment 8386653 [details] [diff] [review] > Part 1(v3): Support mapped array buffer type. > > Review of attachment 8386653 [details] [diff] [review]: > ----------------------------------------------------------------- > > ::: js/src/gc/Memory.h > @@ +44,5 @@ > > +// Allocate mapped memory for object from file descriptor, offset and length > > +// of the file. > > +// The new_fd is duplicated from original fd, for the purpose of cloned object. > > +// The offset must be aligned according to alignment requirement. > > +// Addition page might be allocated depending on offset and header size given. > > s/Addition/An additional/ OK, thanks. > > ::: js/src/jsapi-tests/testMappedArrayBuffer.cpp > @@ +42,5 @@ > > + > > + // Ensure that fd is closed after object been GCed. > > + // Check the fd returned from object created in a function, > > + // then do the GC, in order to guarantee the object is freed when > > + // exact rooting is not on. > > I would be even more conservative here, in case the compiler decides to > inline everything or something. Make this portion of the test > > #ifdef JSGC_USE_EXACT_ROOTING Thanks for this way guarantee it's safe. > > @@ +65,5 @@ > > + void *ptr; > > + int new_fd; > > + if(!JS_CreateMappedArrayBufferContents(fd, &new_fd, offset, length, &ptr)) > > + return nullptr; > > + JS::RootedObject obj(cx, JS_NewArrayBufferWithContents(cx, ptr)); > > I would prefer this one to just be JSObject*, because there's so little > between it and the return. The Rooted makes me look for something tricky > going on. Got it. > > ::: js/src/jsapi.h > @@ +3178,5 @@ > > +/* > > + * Create memory mapped array buffer contents. > > + * For cloning, the fd will not be closed after mapping, and the caller must > > + * take care of closing fd after calling this function. > > + * A new duplicated fd used by mapping is returned in new_fd. > > /by mapping/by the mapping/ Got it. > > @@ +3189,5 @@ > > + * Release the allocated resource of mapped array buffer contents before the > > + * object is created. > > + * If a new object has been created by JS_NewArrayBufferWithContents() with > > + * this content, then JS_NeuterArrayBuffer() should be used instead to release > > + * the resource used by the object. > > Nice comment, thanks. Thank you. > > ::: js/src/vm/ArrayBufferObject.cpp > @@ +668,5 @@ > > + > > + ObjectElements *header = buffer.getElementsHeader(); > > + if (header) { > > + DeallocateMappedObject(buffer.getMappingFD(), header, header->initializedLength); > > + } > > For spidermonkey style, eliminate the { } around a single-line statement. I missed this one, thank you. > > ::: js/src/vm/ArrayBufferObject.h > @@ +18,5 @@ > > > > class ArrayBufferViewObject; > > > > +// Header for mapped array buffer > > +struct MappingInfoHeader > > Ugh. Another header, and it's bigger than the available inline space in an > object. (Which doesn't matter, because these will never be inlined.) > Fortunately, this will go away soon with bhackett's patch. OK. > > @@ +179,5 @@ > > + mh->offset = offset; > > + } > > + > > + static MappingInfoHeader *getMappingInfoHeader(js::ObjectElements *header) { > > + return reinterpret_cast<MappingInfoHeader *>(uintptr_t(header) - > > can you MOZ_ASSERT here that isMappedArrayBuffer()? Got it. I saw both MOZ_ASSERT and JS_ASSERT are used in js/src/, when should we choose which to use? 
> ::: js/src/vm/StructuredClone.cpp > @@ +1242,5 @@ > > +{ > > + void *ptr; > > + int new_fd; > > + if(!JS_CreateMappedArrayBufferContents(fd, &new_fd, offset, length, &ptr)) > > + return false; > > This failure path needs to set an exception on the cx. I think something like > > JS_ReportError(context(), "Failed to create mapped array buffer > contents"); > > should be fine. Got it, thanks.
(In reply to Shian-Yow Wu [:shianyow] from comment #139) > > @@ +179,5 @@ > > > + mh->offset = offset; > > > + } > > > + > > > + static MappingInfoHeader *getMappingInfoHeader(js::ObjectElements *header) { > > > + return reinterpret_cast<MappingInfoHeader *>(uintptr_t(header) - > > > > can you MOZ_ASSERT here that isMappedArrayBuffer()? > > Got it. > I saw both MOZ_ASSERT and JS_ASSERT are used in js/src/, when should we > choose which to use? Good question. I'm not sure what the answer is. I feel like we're slowly transitioning from JS_ASSERT to MOZ_ASSERT, so I use MOZ_ASSERT in new code or when I want to specify a message, and JS_ASSERT when there are others nearby that I want to fit in with. (MOZ_ASSERT is newer, so originally everything was JS_ASSERT.)
(In reply to Steve Fink [:sfink] from comment #140) > (In reply to Shian-Yow Wu [:shianyow] from comment #139) > > > @@ +179,5 @@ > > > > + mh->offset = offset; > > > > + } > > > > + > > > > + static MappingInfoHeader *getMappingInfoHeader(js::ObjectElements *header) { > > > > + return reinterpret_cast<MappingInfoHeader *>(uintptr_t(header) - > > > > > > can you MOZ_ASSERT here that isMappedArrayBuffer()? > > > > Got it. > > I saw both MOZ_ASSERT and JS_ASSERT are used in js/src/, when should we > > choose which to use? > > Good question. I'm not sure what the answer is. I feel like we're slowly > transitioning from JS_ASSERT to MOZ_ASSERT, so I use MOZ_ASSERT in new code > or when I want to specify a message, and JS_ASSERT when there are others > nearby that I want to fit in with. (MOZ_ASSERT is newer, so originally > everything was JS_ASSERT.) I see, thanks for your explanation. And thanks again for your review.
Hi njn, The text description has been modified to fit the style in the latest m-c. May I ask for your review again?
Attachment #8377045 - Attachment is obsolete: true
Attachment #8389170 - Flags: review?(n.nethercote)
Comment on attachment 8389170 [details] [diff] [review] Part 2: Report mapped array buffer statistics for about:memory. Review of attachment 8389170 [details] [diff] [review]: ----------------------------------------------------------------- Thanks. ::: js/xpconnect/src/XPCJSRuntime.cpp @@ +2141,5 @@ > > + if (cStats.objectsExtra.nonHeapElementsMapped > 0) { > + REPORT_BYTES(cJSPathPrefix + NS_LITERAL_CSTRING("objects/non-heap/elements/mapped"), > + KIND_NONHEAP, cStats.objectsExtra.nonHeapElementsMapped, > + "Memory mapped array buffer elements."); "Memory-mapped", i.e. a hyphen is needed.
Attachment #8389170 - Flags: review?(n.nethercote) → review+
The patch is now ready for review. Hi bent, May I invite you to review this patch?
Attachment #8386664 - Attachment is obsolete: true
Attachment #8389723 - Flags: review?(bent.mozilla)
Attached patch Part 3-2: Test case to get array buffer by XHR. (obsolete) (deleted) — Splinter Review
This patch does the following: 1. Create a packaged app. 2. Install the packaged app. 3. Launch the app from an iframe, get the array buffer by XHR, and check that the data is correct and memory-mapped. TODO: 1. Resolve the security check issue. The packaged app created in the iframe is blocked by the URI_DANGEROUS_TO_LOAD security check at: http://dxr.mozilla.org/mozilla-central/source/caps/src/nsScriptSecurityManager.cpp#726 This check is temporarily disabled (on the Gecko side) in this patch for testing. We need a way to bypass this check without modifying Gecko code. 2. Resolve try server issues. https://tbpl.mozilla.org/?tree=Try&rev=799cef54f246 Hi Fabrice, Before formal review, could you check if the overall design of this patch is ok, so that any potential issues can be addressed earlier? Thank you!
Attachment #8389728 - Flags: feedback?(fabrice)
Patch with review comments addressed.
Attachment #8386653 - Attachment is obsolete: true
Attachment #8389741 - Flags: review+
Addressed review comment.
Attachment #8389170 - Attachment is obsolete: true
Attachment #8389744 - Flags: review+
Attached patch Part 3-2: Test case to get array buffer by XHR. (obsolete) (deleted) — Splinter Review
Revised patch, temporarily bypassing the security check via a preference. Mochitest on b2g-desktop runs as a content process, so it's not allowed to create another app from a content process unless we disable the security check. After internal discussion, below are possible solutions: 1. Use SpecialPowers.loadChromeScript() to run a JS script in the chrome process (like bug 951997), then create an iframe and load the app to test. The communication path would be complex though. 2. Enhance the SpecialPowers API to obtain system principal privilege? That should be an easier way.
Attachment #8389728 - Attachment is obsolete: true
Attachment #8389728 - Flags: feedback?(fabrice)
Attachment #8390443 - Flags: feedback?(fabrice)
Rebased. This patch handles the OOP case, which is not mandatory for 1.3T.
Attachment #8373979 - Attachment is obsolete: true
Attachment #8390445 - Attachment description: Part 4: Mapping with file descriptor from remote file open. (Not needed by 1.3T) → Part 4: Mapping with file descriptor from remote file open. (Not mandatory for 1.3T)
Comment on attachment 8390443 [details] [diff] [review] Part 3-2: Test case to get array buffer by XHR. Review of attachment 8390443 [details] [diff] [review]: ----------------------------------------------------------------- Shian-Yow, that looks overall fine, except the security issues we discussed. I'm not sure what's happening that prevents you from loading from the packaged app yet. ::: caps/src/nsScriptSecurityManager.cpp @@ +611,5 @@ > NS_ENSURE_SUCCESS(rv, rv); > } > > + if (aPrincipal == mSystemPrincipal || > + Preferences::GetBool("security.pretend_system_principal", false)) { that's scary ::: dom/apps/tests/test_bug_945152.html @@ +53,5 @@ > + function runTest() { > + // Set up. > + launchableValue = SpecialPowers.setAllAppsLaunchable(true); > + SpecialPowers.setBoolPref("dom.mozBrowserFramesEnabled", true); > + SpecialPowers.setBoolPref("security.pretend_system_principal", true); you should use SpecialPowers.pushPrefEnv() instead. @@ +57,5 @@ > + SpecialPowers.setBoolPref("security.pretend_system_principal", true); > + SpecialPowers.autoConfirmAppInstall(continueTest); > + SpecialPowers.addPermission("browser", true, document); > + SpecialPowers.addPermission("embed-apps", true, document); > + SpecialPowers.addPermission("webapps-manage", true, document); You're already doing that in go().
> that's scary Er.. yes. Why is that change needed, exactly? Comment 149 is talking about creating apps, but this is disabling CheckLoadURI altogether! I'm not OK with us adding a preference like that, since users could set it and then they would be in _big_ trouble.
As mentioned in comment 145, it is a temporary way to disable the security check for local testing. Changing it to a preference in comment 149 was to make it easier to turn on/off without recompiling the Gecko code, while trying possible solutions to load a packaged app in an iframe from a content process. It doesn't mean we really want to do it that way; sorry for the confusion. Will add comments for temporary code next time.
Blocks: 958440
I backed out both parts of this just now: https://hg.mozilla.org/integration/mozilla-inbound/rev/963a4aa83275 https://hg.mozilla.org/integration/mozilla-inbound/rev/2b29e7ac771e Nothing wrong with your changes, they just had the misfortune of being squarely in code in need of immediate security-bug-fixing *right now*. Rather than attempt a parlous rebase of my patchwork atop yours, with the clock ticking, sfink and I agreed it made the most sense to just back this out to wrap up that work. Once we've finished up that stuff, this can reland with the necessary rebasing changes.
Comment on attachment 8389741 [details] [diff] [review] Part 1: Support mapped array buffer type. r=sfink Review of attachment 8389741 [details] [diff] [review]: ----------------------------------------------------------------- I hate to do this to you, but there are a couple of reasons why this can't just reland. First, I screwed up the review. The addition of SCTAG_MAPPED_ARRAY_BUFFER_OBJECT means the structured clone format has changed, and JS_STRUCTURED_CLONE_VERSION needs to be bumped. (This will end up requiring a version bump for the IndexedDB schema, but a compile error will point you at that once you bump the structured clone version.) Second, the patch for bug 982974 unfortunately collided with this patch. We'll either need to do something similar here, or wait to land it until we've rooted out all those issues. That patch has a single hasStealableContents() concept, and mapped arraybuffers aren't exactly "yes" or "no". Mapped buffers are "yes" when deciding whether to allocate a dummy backing buffer, "no" when deciding whether the dataPointer can be stolen away, but "yes" again when deciding where to copy ("yes", here, means don't copy the memory). I don't think it's all that bad, it just means changing a few of the if conditions. Third, when I thought about the Transferable semantics of these new ArrayBuffers, I'm not sure you really want what this patch implements. With this patch, when you transfer a buffer, you'll end up closing the original fd and unmapping the original memory mapping, and creating a new fd and creating a new memory mapping with it. That sorta works, but the actual virtual addresses will change during a transfer. That's true of inline and asm.js buffers already, so it's not a killer, but it does seem unexpected and inefficient. I think you want to either error out when trying to transfer these, or do a transfer by giving the new ArrayBuffer the same fd and memory mapping (and clearing it out of the neutered buffer.) Fourth, given that there's still an unreviewed patch on this bug and an open feedback question, I think it might be better at this point to re-land bug 979480 ahead of this patch. That'll give a cleaner base to work on top of. It would be optimal if the rebased version of this patch eliminated MappingInfoHeader and used slots to store those fields instead. (That wasn't really possible before bug 979480, but it's a better approach after.)
Attachment #8389741 - Flags: review+
(In reply to Steve Fink [:sfink] from comment #157) > First, I screwed up the review. The addition of > SCTAG_MAPPED_ARRAY_BUFFER_OBJECT means the structured clone format has > changed, and JS_STRUCTURED_CLONE_VERSION needs to be bumped. (This will end > up requiring a version bump for the IndexedDB schema, but a compile error > will point you at that once you bump the structured clone version.) Addressed in new patch. > > Second, the patch for bug 982974 unfortunately collided with this patch. > We'll either need to do something similar here, or wait to land it until > we've rooted out all those issues. That patch has a single > hasStealableContents() concept, and mapped arraybuffers aren't exactly "yes" > or "no". Mapped buffers are "yes" when deciding whether to allocate a dummy > backing buffer, "no" when deciding whether the dataPointer can be stolen > away, but "yes" again when deciding where to copy ("yes", here, means don't > copy the memory). I don't think it's all that bad, it just means changing a > few of the if conditions. Not sure that I fully understood the idea mentioned here. When stealing contents from a mapped array buffer, we are not sure whether the contents will be used in a mapped array buffer or not. So I understand that we should copy the mapped memory to a new allocated buffer as the new content. However, I don't understand what you mean: "yes" again when deciding where to copy ("yes", here, means don't copy the memory). It sounds like creating a new memory mapping, instead of copying it to the previously allocated buffer? > Third, when I thought about the Transferable semantics of these new > ArrayBuffers, I'm not sure you really want what this patch implements. With > this patch, when you transfer a buffer, you'll end up closing the original > fd and unmapping the original memory mapping, and creating a new fd and > creating a new memory mapping with it. That sorta works, but the actual > virtual addresses will change during a transfer. That's true of inline and > asm.js buffers already, so it's not a killer, but it does seem unexpected > and inefficient. I think you want to either error out when trying to > transfer these, or do a transfer by giving the new ArrayBuffer the same fd > and memory mapping (and clearing it out of the neutered buffer.) I think that I need a deeper study on the Transferable mechanism before properly implementing the transfer by same fd and memory mapping. Will error it out in new patch first. > > Fourth, given that there's still an unreviewed patch on this bug and an open > feedback question, I think it might be better at this point to re-land bug > 979480 ahead of this patch. That'll give a cleaner base to work on top of. > It would be optimal if the rebased version of this patch eliminated > MappingInfoHeader and used slots to store those fields instead. (That wasn't > really possible before bug 979480, but it's a better approach after.) Sure. Rebased on bug 979480 and elimited MappingInfoHeader in new patch.
Attached patch Part 1: Support mapped array buffer type. (obsolete) (deleted) — Splinter Review
Changes in this patch: 1. Rebased on bug 979480 and bug 982974. 2. Used slots to store mapping information, eliminating MappingInfoHeader. 3. Added a case to test stealing contents from mapped array buffer. 4. Bumped JS_STRUCTURED_CLONE_VERSION. 5. Error out the transfer in StructuredClone.cpp.
Attachment #8389741 - Attachment is obsolete: true
Attachment #8394213 - Flags: review?(sphink)
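For item 5 in the change list above, a hedged sketch of what "error out the transfer" could look like on the structured-clone write path; the actual patch may report a different error, this only shows the shape of the check (isMappedArrayBuffer() is the flag added by part 1, and context() is the writer's JSContext accessor mentioned earlier in the review thread):

    // Illustrative fragment only, not the actual patch:
    if (buffer.isMappedArrayBuffer()) {
        JS_ReportError(context(), "cannot transfer a mapped ArrayBuffer yet");
        return false;   // refuse the transfer instead of neutering the source
    }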
Rebased.
Attachment #8389744 - Attachment is obsolete: true
Rebased.
Attachment #8389723 - Attachment is obsolete: true
Attachment #8389723 - Flags: review?(bent.mozilla)
Attachment #8394217 - Flags: review?(bent.mozilla)
Attached patch Part 1: Support mapped array buffer type. (obsolete) (deleted) — Splinter Review
Fixed nits from previous patch.
Attachment #8394213 - Attachment is obsolete: true
Attachment #8394213 - Flags: review?(sphink)
Attachment #8394225 - Flags: review?(sphink)
triage: minus, as this saves memory only when the dictionary is being used, and the dictionary will be preffed off by default. Please ask for approval to land this in 1.3T when ready. Thanks.
blocking-b2g: 1.3T+ → -
Comment on attachment 8390443 [details] [diff] [review] Part 3-2: Test case to get array buffer by XHR. Review of attachment 8390443 [details] [diff] [review]: ----------------------------------------------------------------- Shian-Yow, that looks overall fine, except the security issues we discussed. I'm not sure what's happening that prevents you from loading from the packaged app yet. You could maybe use bug 982491 once it lands to run your app in the same process as the mochitest app. ::: caps/src/nsScriptSecurityManager.cpp @@ +611,5 @@ > NS_ENSURE_SUCCESS(rv, rv); > } > > + if (aPrincipal == mSystemPrincipal || > + Preferences::GetBool("security.pretend_system_principal", false)) { that's scary ::: dom/apps/tests/test_bug_945152.html @@ +53,5 @@ > + function runTest() { > + // Set up. > + launchableValue = SpecialPowers.setAllAppsLaunchable(true); > + SpecialPowers.setBoolPref("dom.mozBrowserFramesEnabled", true); > + SpecialPowers.setBoolPref("security.pretend_system_principal", true); you should use SpecialPowers.pushPrefEnv() instead. @@ +57,5 @@ > + SpecialPowers.setBoolPref("security.pretend_system_principal", true); > + SpecialPowers.autoConfirmAppInstall(continueTest); > + SpecialPowers.addPermission("browser", true, document); > + SpecialPowers.addPermission("embed-apps", true, document); > + SpecialPowers.addPermission("webapps-manage", true, document); You're already doing that in go().
Attachment #8390443 - Flags: feedback?(fabrice) → feedback+
(In reply to Fabrice Desré [:fabrice] from comment #164) > I'm not sure what's happening that prevents you from loading from the > packaged app yet. The security check prevents launching an app with the app:// scheme in an iframe from a parent app with the http:// scheme, unless the parent app has chrome privilege, such as the Gaia system app. However, if we move the testing to a chrome script in mochitest, only a limited set of APIs is available. > You could maybe use bug 982491 once it lands to run your app in the same > process as the mochitest app. Good idea. Unfortunately, when I tested launching app:// in a remote iframe using bug 982491, the security check CheckLoadURIWithPrincipal() is performed before CreateBrowserOrApp().
Hi bent, Sorry that this part needs to be updated again. Changes in this patch: 1. Added jar:// URL support.
Attachment #8394217 - Attachment is obsolete: true
Attachment #8394217 - Flags: review?(bent.mozilla)
Attachment #8397045 - Flags: review?(bent.mozilla)
Changes in this patch: 1. Addressed previous feedback comments. 2. Disabled the security workaround by default. TODO: 1. Security check issue.
Attachment #8390443 - Attachment is obsolete: true
Hi Fabrice, This patch does a similar test with a jar:// URL. If the issue with testing by launching a packaged app cannot be resolved shortly, we will probably rely on this patch for this bug first.
Attachment #8397052 - Flags: review?(fabrice)
Follow up bugs: Bug 988813 - Support memory mapped array buffer for Windows platform Bug 988814 - Support Transferables for memory mapped array buffer
Follow up bug for comment 27: Bug 988815 - Memory mapping array buffer for blobs coming from IndexedDB
Follow up bug from comment 59: Bug 988816 - Remote file open for memory mapped array buffer in content process
Comment on attachment 8397052 [details] [diff] [review] Part 3-3: Test case to get array buffer by XHR with jar:// URL. Review of attachment 8397052 [details] [diff] [review]: ----------------------------------------------------------------- Thanks Shian-Yow!
Attachment #8397052 - Flags: review?(fabrice) → review+
Comment on attachment 8394225 [details] [diff] [review] Part 1: Support mapped array buffer type. Review of attachment 8394225 [details] [diff] [review]: ----------------------------------------------------------------- I'd like to review this again after creating a subtype. ::: js/src/jsapi.h @@ +3116,4 @@ > */ > extern JS_PUBLIC_API(JSObject *) > +JS_NewArrayBufferWithContents(JSContext *cx, size_t nbytes, void *contents, > + bool mapped = false, int fd = -1, size_t offset = 0); I'd prefer a separate function instead of the boolean. ::: js/src/vm/ArrayBufferObject.h @@ +62,2 @@ > > + static const uint8_t RESERVED_SLOTS = 6; This would create 2 unused slots in non-mapped ArrayBuffers, which would reduce the space available for inline data. I would much rather have non-mapped ArrayBuffers stay at 4 slots and give mapped ArrayBuffers 6. To do that, I'm afraid you'll need to create another ArrayBufferObject subclass, following the model of SharedArrayBufferObject. (Yes, the if statements will get verbose, but I think it's necessary.) It would be nice to have an actual subclassing mechanism for things like this, but I guess we don't have very many things like this in the engine right now. ::: js/src/vm/StructuredClone.cpp @@ +65,5 @@ > SCTAG_REGEXP_OBJECT, > SCTAG_ARRAY_OBJECT, > SCTAG_OBJECT_OBJECT, > SCTAG_ARRAY_BUFFER_OBJECT, > + SCTAG_MAPPED_ARRAY_BUFFER_OBJECT, This needs to go at the end. We want to be able to read in v1 clone buffers, which means we can't renumber the existing tags. Please add this after SCTAG_TYPED_ARRAY_OBJECT. (Unfortunately, we can't reuse SCTAG_DO_NOT_USE_1 either, because then reading in an old v1 clone would misinterpret the things that used to use that value as this new type.)
Attachment #8394225 - Flags: review?(sphink)
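An illustrative enum layout for the tag constraint described above; member names other than the two mentioned in the review are placeholders. The point is that new tags are only ever appended, so clone data written by an older version keeps decoding to the same types.

    enum StructuredDataType {
        // ... existing tags keep their historical values; never renumber them ...
        SCTAG_ARRAY_BUFFER_OBJECT,
        SCTAG_TYPED_ARRAY_OBJECT,
        // The new tag goes after the last existing one, and the clone format
        // version is bumped when the change affects what readers must understand.
        SCTAG_MAPPED_ARRAY_BUFFER_OBJECT
    };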
Actually, don't start implementing the subtype yet. Today Ben Turner, Luke Wagner, and I discussed this and decided to do things a little differently -- when cloning one of these ArrayBuffers, just copy the data into a normal ArrayBuffer on the other side if you aren't transferring. If you transfer, then the source ArrayBuffer will get neutered and the destination will be a file backed array buffer. I would expect that you would always transfer. There are some more details that I (or bent) will fill in later, since I'll get the details wrong if I try to do it right now. I just wanted to try to prevent you from doing extra work here.
Comment on attachment 8397045 [details] [diff] [review] Part 3-1: Mapping array buffer to uncompressed file in zip package. Review of attachment 8397045 [details] [diff] [review]: ----------------------------------------------------------------- Ok, I looked through this and gave a pretty mechanical review. Unfortunately I don't think I'm really the right person to do a full review though. You'll need someone like :taras or :mwu to sign off on changes to libjar, and then I'd recommend asking :smaug to review your XHR changes. ::: content/base/src/nsXMLHttpRequest.cpp @@ +1749,5 @@ > if (xmlHttpRequest->mResponseType == XML_HTTP_RESPONSE_TYPE_MOZ_BLOB) { > xmlHttpRequest->mResponseBlob = nullptr; > } > + } else if ((xmlHttpRequest->mResponseType == XML_HTTP_RESPONSE_TYPE_ARRAYBUFFER || > + xmlHttpRequest->mResponseType == XML_HTTP_RESPONSE_TYPE_CHUNKED_ARRAYBUFFER) && Nit: line up the two 'x' there? @@ +1958,5 @@ > // Set up arraybuffer > if (mResponseType == XML_HTTP_RESPONSE_TYPE_ARRAYBUFFER && NS_SUCCEEDED(status)) { > + if (mIsMappedArrayBuffer) { > + nsCOMPtr<nsIURI> uri; > + mChannel->GetURI(getter_AddRefs(uri)); The code currently uses |channel|, not |mChannel|. Are you sure it's safe to use |mChannel| here? @@ +1959,5 @@ > if (mResponseType == XML_HTTP_RESPONSE_TYPE_ARRAYBUFFER && NS_SUCCEEDED(status)) { > + if (mIsMappedArrayBuffer) { > + nsCOMPtr<nsIURI> uri; > + mChannel->GetURI(getter_AddRefs(uri)); > + if (uri) { Do we ever expect this to be null? I think we should warn or assert here. @@ +1961,5 @@ > + nsCOMPtr<nsIURI> uri; > + mChannel->GetURI(getter_AddRefs(uri)); > + if (uri) { > + nsAutoCString file; > + nsAutoCString scheme; These can just be nsCString. @@ +1972,5 @@ > + if (jarURI) { > + jarURI->GetJAREntry(file); > + } > + } > + nsCOMPtr<nsIJARChannel> jarChannel = do_QueryInterface(mChannel); I think you should check this condition first before doing any of the path manipulation above. @@ +1975,5 @@ > + } > + nsCOMPtr<nsIJARChannel> jarChannel = do_QueryInterface(mChannel); > + if (jarChannel) { > + nsIFile *jarFile; > + jarChannel->GetJarFile(&jarFile); This needs to go into a nsCOMPtr, and it should be null-checked or asserted. @@ +1980,5 @@ > + rv = mArrayBufferBuilder.mapToFileInPackage(file, jarFile); > + if (NS_FAILED(rv)) { > + // TODO: Debugging message to be removed > + printf_stderr("Mapping array buffer to %s failed.\n", file.get()); > + mIsMappedArrayBuffer = false; printfs should be removed, just use: if (NS_WARN_IF(NS_FAILED(rv))) { mIsMappedArrayBuffer = false; } And you'll get your warning. @@ +1984,5 @@ > + mIsMappedArrayBuffer = false; > + } else { > + // TODO: Debugging message to be removed > + printf_stderr("Mapped array buffer to %s.\n", file.get()); > + channel->SetContentType(NS_LITERAL_CSTRING("application/xml-mem-mapped")); xml?! @@ +1989,5 @@ > + } > + } > + } > + } > + if (!mIsMappedArrayBuffer) { Nit: else would be better here. @@ +2907,5 @@ > getter_AddRefs(mCORSPreflightChannel)); > NS_ENSURE_SUCCESS(rv, rv); > } > else { > + if (mResponseType == XML_HTTP_RESPONSE_TYPE_ARRAYBUFFER) { What about XML_HTTP_RESPONSE_TYPE_CHUNKED_ARRAYBUFFER? @@ +2909,5 @@ > } > else { > + if (mResponseType == XML_HTTP_RESPONSE_TYPE_ARRAYBUFFER) { > + nsCOMPtr<nsIURI> uri; > + nsAutoCString scheme; nsCString. @@ +3912,5 @@ > { > if (mDataPtr) { > JS_free(nullptr, mDataPtr); > } > + if (mMapPtr && mMapFd >= 0) { You should assert that |mMapPtr| can never be null if |mMapFd >= 0|, and vice-versa. 
@@ +4017,5 @@ > } > > +nsresult > +ArrayBufferBuilder::mapToFileInPackage(const nsCString& aFile, > + nsIFile *aJarFile) Nit: * on the left. @@ +4021,5 @@ > + nsIFile *aJarFile) > +{ > +#ifdef XP_WIN > + // To be implemented. > + return NS_ERROR_NOT_IMPLEMENTED; Might as well add a MOZ_CRASH here to make sure we see this. @@ +4036,5 @@ > + clonedFile->OpenNSPRFileDesc(PR_RDONLY, 0, &pr_fd); > + if (!pr_fd) { > + return NS_ERROR_FAILURE; > + } > + int fd = dup(PR_FileDesc2NativeHandle(pr_fd)); Please don't do this (or the OpenNSPRFileDesc above) until you actually need to use them. That will help you cut down on the number of places you have to remember to clean them up. Also, RAII helpers would be great here. See |ScopedClose| and |AutoFDClose| in FileUtils.h. ::: content/base/src/nsXMLHttpRequest.h @@ +92,5 @@ > > JSObject* getArrayBuffer(JSContext* aCx); > > + // Memory mapping to a stored(uncompressed) file in zip package. > + nsresult mapToFileInPackage(const nsCString& aFile, nsIFile *aJarFile); Nit: * on the left ::: modules/libjar/nsJARChannel.cpp @@ +827,5 @@ > > +NS_IMETHODIMP > +nsJARChannel::GetJarFile(nsIFile* *aFile) > +{ > + *aFile = mJarFile; XPCOM getters must add a reference if they return something. If you can guarantee that mJarFile is non-null then this should be: NS_ADDREF(*aFile = mJarFile) Otherwise if it might be null: NS_IF_ADDREF(*aFile = mJarFile);
Attachment #8397045 - Flags: review?(bent.mozilla)
(In reply to ben turner [:bent] (use the needinfo? flag!) from comment #175) > Comment on attachment 8397045 [details] [diff] [review] > Part 3-1: Mapping array buffer to uncompressed file in zip package. > > Review of attachment 8397045 [details] [diff] [review]: > ----------------------------------------------------------------- > > Ok, I looked through this and gave a pretty mechanical review. Unfortunately > I don't think I'm really the right person to do a full review though. You'll > need someone like :taras or :mwu to sign off on changes to libjar, and then > I'd recommend asking :smaug to review your XHR changes. OK, thanks for the review comments and recommended reviews. > > ::: content/base/src/nsXMLHttpRequest.cpp > @@ +1749,5 @@ > > if (xmlHttpRequest->mResponseType == XML_HTTP_RESPONSE_TYPE_MOZ_BLOB) { > > xmlHttpRequest->mResponseBlob = nullptr; > > } > > + } else if ((xmlHttpRequest->mResponseType == XML_HTTP_RESPONSE_TYPE_ARRAYBUFFER || > > + xmlHttpRequest->mResponseType == XML_HTTP_RESPONSE_TYPE_CHUNKED_ARRAYBUFFER) && > > Nit: line up the two 'x' there? OK. > > @@ +1958,5 @@ > > // Set up arraybuffer > > if (mResponseType == XML_HTTP_RESPONSE_TYPE_ARRAYBUFFER && NS_SUCCEEDED(status)) { > > + if (mIsMappedArrayBuffer) { > > + nsCOMPtr<nsIURI> uri; > > + mChannel->GetURI(getter_AddRefs(uri)); > > The code currently uses |channel|, not |mChannel|. Are you sure it's safe to > use |mChannel| here? Should use |channel| instead, thanks. > > @@ +1959,5 @@ > > if (mResponseType == XML_HTTP_RESPONSE_TYPE_ARRAYBUFFER && NS_SUCCEEDED(status)) { > > + if (mIsMappedArrayBuffer) { > > + nsCOMPtr<nsIURI> uri; > > + mChannel->GetURI(getter_AddRefs(uri)); > > + if (uri) { > > Do we ever expect this to be null? I think we should warn or assert here. OK. > > @@ +1961,5 @@ > > + nsCOMPtr<nsIURI> uri; > > + mChannel->GetURI(getter_AddRefs(uri)); > > + if (uri) { > > + nsAutoCString file; > > + nsAutoCString scheme; > > These can just be nsCString. Could you eleborate this? It was suggested at https://developer.mozilla.org/en-US/docs/Mozilla_internal_string_guide "If a string less than 64 code units is assigned to an nsAutoString, then no extra storage will be allocated." > > @@ +1972,5 @@ > > + if (jarURI) { > > + jarURI->GetJAREntry(file); > > + } > > + } > > + nsCOMPtr<nsIJARChannel> jarChannel = do_QueryInterface(mChannel); > > I think you should check this condition first before doing any of the path > manipulation above. OK. > > @@ +1975,5 @@ > > + } > > + nsCOMPtr<nsIJARChannel> jarChannel = do_QueryInterface(mChannel); > > + if (jarChannel) { > > + nsIFile *jarFile; > > + jarChannel->GetJarFile(&jarFile); > > This needs to go into a nsCOMPtr, and it should be null-checked or asserted. Got it. > > @@ +1980,5 @@ > > + rv = mArrayBufferBuilder.mapToFileInPackage(file, jarFile); > > + if (NS_FAILED(rv)) { > > + // TODO: Debugging message to be removed > > + printf_stderr("Mapping array buffer to %s failed.\n", file.get()); > > + mIsMappedArrayBuffer = false; > > printfs should be removed, just use: > > if (NS_WARN_IF(NS_FAILED(rv))) { > mIsMappedArrayBuffer = false; > } > > And you'll get your warning. Got it, thanks. > > @@ +1984,5 @@ > > + mIsMappedArrayBuffer = false; > > + } else { > > + // TODO: Debugging message to be removed > > + printf_stderr("Mapped array buffer to %s.\n", file.get()); > > + channel->SetContentType(NS_LITERAL_CSTRING("application/xml-mem-mapped")); > > xml?! The original content type for array buffer response is "application/xml". 
The "-mem-mapped" is added to indicate it's memory mapped. Yes, xml is misleading. I'm thinking about changing it to "application/mem-mapped", probably. > > @@ +1989,5 @@ > > + } > > + } > > + } > > + } > > + if (!mIsMappedArrayBuffer) { > > Nit: else would be better here. When memory mapping failed, we want it fallback to the old way. if (mIsMappedArrayBuffer) { // Do memory mapping, and fallback to old way if failed. rv = mapToFileInPackage(); if (NS_FAILED(rv)) { mIsMappedArrayBuffer = false; } } if (!mIsMappedArrayBuffer) { // Old way } > > @@ +2907,5 @@ > > getter_AddRefs(mCORSPreflightChannel)); > > NS_ENSURE_SUCCESS(rv, rv); > > } > > else { > > + if (mResponseType == XML_HTTP_RESPONSE_TYPE_ARRAYBUFFER) { > > What about XML_HTTP_RESPONSE_TYPE_CHUNKED_ARRAYBUFFER? I didn't think of this type of array buffer before. Will check if it can be supported as well. > > @@ +2909,5 @@ > > } > > else { > > + if (mResponseType == XML_HTTP_RESPONSE_TYPE_ARRAYBUFFER) { > > + nsCOMPtr<nsIURI> uri; > > + nsAutoCString scheme; > > nsCString. Same question as the above. > > @@ +3912,5 @@ > > { > > if (mDataPtr) { > > JS_free(nullptr, mDataPtr); > > } > > + if (mMapPtr && mMapFd >= 0) { > > You should assert that |mMapPtr| can never be null if |mMapFd >= 0|, and > vice-versa. OK. > > @@ +4017,5 @@ > > } > > > > +nsresult > > +ArrayBufferBuilder::mapToFileInPackage(const nsCString& aFile, > > + nsIFile *aJarFile) > > Nit: * on the left. OK. > > @@ +4021,5 @@ > > + nsIFile *aJarFile) > > +{ > > +#ifdef XP_WIN > > + // To be implemented. > > + return NS_ERROR_NOT_IMPLEMENTED; > > Might as well add a MOZ_CRASH here to make sure we see this. OK. > > @@ +4036,5 @@ > > + clonedFile->OpenNSPRFileDesc(PR_RDONLY, 0, &pr_fd); > > + if (!pr_fd) { > > + return NS_ERROR_FAILURE; > > + } > > + int fd = dup(PR_FileDesc2NativeHandle(pr_fd)); > > Please don't do this (or the OpenNSPRFileDesc above) until you actually need > to use them. That will help you cut down on the number of places you have to > remember to clean them up. The dup() and OpenNSPRFileDesc() are used here because we also have to support remote file open when doing it in content process. The current remote file open on JAR channel has limitation on multiple open()s (comment 83). But since I've filed a follow up bug to support remote file open, I will make it simple in this bug, and leave complicated things in new bug. > > Also, RAII helpers would be great here. See |ScopedClose| and |AutoFDClose| > in FileUtils.h. OK, thank you for this information. > > ::: content/base/src/nsXMLHttpRequest.h > @@ +92,5 @@ > > > > JSObject* getArrayBuffer(JSContext* aCx); > > > > + // Memory mapping to a stored(uncompressed) file in zip package. > > + nsresult mapToFileInPackage(const nsCString& aFile, nsIFile *aJarFile); > > Nit: * on the left OK. > > ::: modules/libjar/nsJARChannel.cpp > @@ +827,5 @@ > > > > +NS_IMETHODIMP > > +nsJARChannel::GetJarFile(nsIFile* *aFile) > > +{ > > + *aFile = mJarFile; > > XPCOM getters must add a reference if they return something. If you can > guarantee that mJarFile is non-null then this should be: > > NS_ADDREF(*aFile = mJarFile) > > Otherwise if it might be null: > > NS_IF_ADDREF(*aFile = mJarFile); Got it, thank you.
Flags: needinfo?(bent.mozilla)
(In reply to Shian-Yow Wu [:shianyow] from comment #176) > > These can just be nsCString. > > Could you eleborate this? > > It was suggested at > https://developer.mozilla.org/en-US/docs/Mozilla_internal_string_guide > "If a string less than 64 code units is assigned to an nsAutoString, then no > extra storage will be allocated." Ah, I'm sorry, I didn't really explain. nsAutoString has a stack buffer built in that holds 64 chars, so if you're going to be doing manipulations of short strings it saves mallocs. But in this case you're just getting a string from somewhere else (which doesn't copy into the stack space) and then using it for comparison. So the stack space is essentially just unused. It's not a big deal in any case :) > When memory mapping failed, we want it fallback to the old way. Ah yes, sorry. I missed the reassignment. > > What about XML_HTTP_RESPONSE_TYPE_CHUNKED_ARRAYBUFFER? > > I didn't think of this type of array buffer before. > Will check if it can be supported as well. It seems like you check both types (ARRAYBUFFER and CHUNKED_ARRAYBUFFER) together everywhere else (maybe a good argument for a small inlined function?) so it looked suspicious that only one was tested here.
Flags: needinfo?(bent.mozilla)
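To illustrate the distinction (a minimal sketch, not code from any patch here; the helper names are made up for the example, assuming nsString.h and nsIURI.h):

  // Short string assembled in place: nsAutoCString's 64-char inline buffer
  // saves a heap allocation.
  static void BuildMappedContentType(nsACString& aOut)
  {
    nsAutoCString type;
    type.AppendLiteral("application/");
    type.AppendLiteral("mem-mapped");
    aOut = type;
  }

  // String merely received from a getter and then compared: a plain nsCString
  // is enough, since the inline stack buffer would go unused anyway.
  static bool IsPackagedScheme(nsIURI* aURI)
  {
    nsCString scheme;
    aURI->GetScheme(scheme);
    return scheme.LowerCaseEqualsLiteral("app") ||
           scheme.LowerCaseEqualsLiteral("jar");
  }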
(In reply to Steve Fink [:sfink] from comment #174) > Actually, don't start implementing the subtype yet. Today Ben Turner, Luke > Wagner, and I discussed this and decided to do things a little differently > -- when cloning one of these ArrayBuffers, just copy the data into a normal > ArrayBuffer on the other side if you aren't transferring. If you transfer, > then the source ArrayBuffer will get neutered and the destination will be a > file backed array buffer. I would expect that you would always transfer. > > There are some more details that I (or bent) will fill in later, since I'll > get the details wrong if I try to do it right now. I just wanted to try to > prevent you from doing extra work here. Sorry for the delay. So the idea is to use a flag bit on ArrayBufferObject to specify that it needs to be cleaned up via unmapping rather than freeing. Then when cloning normally, a mapped ArrayBuffer will be converted to a plain ArrayBuffer (the data gets copied.) When transferring, the origin ArrayBuffer will be neutered and the new ArrayBuffer will inherit the flag bit instead. It seems like there's no need to pass along the file descriptor with the ArrayBuffer. The pointer to memory is enough, as long as you know whether to unmap it or free it. If that is incorrect, we'll need to reconsider. So when creating one of these ArrayBuffers, you'll pass in the file descriptor to the creation function, and it'll do the mapping. Afterwards, the caller should be able to close the file descriptor and forget about it. Also, the gc/Memory.cpp code doesn't need to accommodate a header anymore, which should simplify it.
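To make that lifecycle concrete, here is a hedged sketch using the helper names that show up in the WIP patches below (the wrapper function and exact signatures are illustrative, not the final API):

  static const size_t kMappedBufferAlignment = 8;  // alignment used by the WIP patch

  JSObject* NewFileBackedArrayBuffer(JSContext* cx, int fd,
                                     size_t offset, size_t length)
  {
    // The creation function does the private mapping itself; the caller just
    // hands in the descriptor and can close it right afterwards.
    void* contents = AllocateMappedContent(fd, offset, length, kMappedBufferAlignment);
    if (!contents) {
      return nullptr;
    }

    // The resulting ArrayBuffer carries a "mapped" flag bit, so finalization
    // (or a failed creation) unmaps the pages instead of freeing them.
    JSObject* obj = JS_NewMappedArrayBufferWithContents(cx, length, contents);
    if (!obj) {
      JS_ReleaseMappedArrayBufferContents(contents, length);
    }
    return obj;
  }

Cloning such a buffer copies the data into a plain ArrayBuffer on the other side; transferring it neuters the source and hands the mapping (and the flag bit) to the destination.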
Attached patch (WIP) Transfer mapped array buffer object (obsolete) (deleted) — Splinter Review
Steve, do you have an example of how to transfer an object in JSAutoStructuredCloneBuffer? In the WIP patch, I added a new test TestTransferObject() and tried to pass an array buffer object as the transferable in JSAutoStructuredCloneBuffer::write(), but it failed in parseTransferable() when doing the JS_IsArrayObject() check.
Flags: needinfo?(sphink)
(In reply to Shian-Yow Wu [:shianyow] from comment #179) > Created attachment 8403241 [details] [diff] [review] > (WIP) Transfer mapped array buffer object > > Steve, do you have an example how to transfer an object in > JSAutoStructuredCloneBuffer? > > In the WIP patch, I added a new test TestTransferObject(), and tried to pass > an array buffer object as transferable in > JSAutoStructuredCloneBuffer::write(), but it failed in parseTransferable() > when doing JS_IsArrayObject() check. You have to pass in an Array of transferable values. So in this case, you'd create an Array and insert an ArrayBufferObject into it. I should really have an example in jsapi-tests.
Flags: needinfo?(sphink)
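For reference, a sketch of what that Array-of-transferables call can look like (2014-era JSAPI; the exact write() overload should be checked against the local jsapi.h):

  // Wrap the ArrayBuffer in an Array and pass that Array as the transferable
  // list; parseTransferable() expects an Array, which is why passing the
  // ArrayBuffer directly failed the JS_IsArrayObject() check.
  bool TransferBuffer(JSContext* cx, JS::HandleObject buffer,
                      JSAutoStructuredCloneBuffer& cloned)
  {
    JS::AutoValueArray<1> elems(cx);
    elems[0].setObject(*buffer);

    JS::RootedObject transferArray(cx, JS_NewArrayObject(cx, elems));
    if (!transferArray) {
      return false;
    }

    JS::RootedValue value(cx, JS::ObjectValue(*buffer));
    JS::RootedValue transferable(cx, JS::ObjectValue(*transferArray));
    return cloned.write(cx, value, transferable, nullptr, nullptr);
  }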
The patch supports mapped array buffers based on comment 178. Removing the fd and transferring the mapped array buffer when cloning makes the mapped array buffer code simpler. However, there is one issue we need to address: if the JS code makes an XHR request in a worker, once the response is ready there will be a total of 3 DispatchDOMEvent() calls sent from the main-thread XHR to the XHR in the worker, each triggering a transfer by cloning the mapped array buffer.
First here: http://dxr.mozilla.org/mozilla-central/source/content/base/src/nsXMLHttpRequest.cpp#3302
Then here: http://dxr.mozilla.org/mozilla-central/source/content/base/src/nsXMLHttpRequest.cpp#1448
And then here, when the same function is called again: http://dxr.mozilla.org/mozilla-central/source/content/base/src/nsXMLHttpRequest.cpp#1451
Each event triggers a cloning by transfer. The 1st transfer will succeed, but the 2nd and 3rd transfers fail because the mapped array buffer response was already neutered by the 1st transfer. This is not a problem for async XHR, but sync XHR would fail. I am not sure whether it is a bug in the current XHR implementation. But since we removed the file descriptor information in the new implementation, we are not able to clone multiple mapped array buffer instances at the same time. So either we limit the XHR cloning to once (I hope so), or we support multiple cloning (which requires file descriptor information in the array buffer object). Any thoughts?
Attachment #8403241 - Attachment is obsolete: true
Attachment #8404636 - Flags: review?(sphink)
This is the XHR implementation for mapped array buffer, which: 1. Supported transferables when cloning. 2. Addressed comment 176. 3. Added pref "dom.mapped_arraybuffer.enabled". TODO: 1. Getting mapped array buffer by sync XHR may fail (see comment 181).
(In reply to ben turner [:bent] (use the needinfo? flag!) from comment #177) > It seems like you check both types (ARRAYBUFFER and CHUNKED_ARRAYBUFFER) > together everywhere else (maybe a good argument for a small inlined > function?) so it looked suspicious that only one was tested here. According to the definition of moz-chunked-arraybuffer on MDN, it's not valid for the memory-mapped case. https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest So we should only support the ARRAYBUFFER type. The checks for both types were already there in the original code, and the patch added "!xmlHttpRequest->mIsMappedArrayBuffer" to those places to avoid doing what isn't needed for mapped array buffers.
Changes in this patch: 1. Moved the XHR test into a worker. 2. Added setting the pref "dom.mapped_arraybuffer.enabled". Fabrice, some enhancements were added to the previous patch you reviewed. Could you please review it again?
Attachment #8397052 - Attachment is obsolete: true
Attachment #8404648 - Flags: review?(fabrice)
Hi smaug, as bent recommended, may I invite you to review the XHR changes for this bug? The latest WIP patch related to the XHR change is in comment 182, and there is still one issue to solve before asking for review. The test case related to it is in part 3-3; I asked Fabrice to review it as he reviewed it last time. If you think it's related I will put a review flag on you too. Ni from bent and smaug, for the issue stated in comment 181.
Flags: needinfo?(bugs)
Flags: needinfo?(bent.mozilla)
Comment on attachment 8404643 [details] [diff] [review] Mapping array buffer to uncompressed file in zip package. I could review >- } else if (xmlHttpRequest->mResponseType == XML_HTTP_RESPONSE_TYPE_ARRAYBUFFER || >- xmlHttpRequest->mResponseType == XML_HTTP_RESPONSE_TYPE_CHUNKED_ARRAYBUFFER) { >+ } else if ((xmlHttpRequest->mResponseType == XML_HTTP_RESPONSE_TYPE_ARRAYBUFFER || >+ xmlHttpRequest->mResponseType == XML_HTTP_RESPONSE_TYPE_CHUNKED_ARRAYBUFFER) && >+ !xmlHttpRequest->mIsMappedArrayBuffer) { Is it ever possible to have XML_HTTP_RESPONSE_TYPE_CHUNKED_ARRAYBUFFER and xmlHttpRequest->mIsMappedArrayBuffer? >+ArrayBufferBuilder::mapToFileInPackage(const nsCString& aFile, >+ nsIFile* aJarFile) Why lowercase 'm' ? Or does the whole ArrayBufferBuilder use this odd coding style... I guess so.
(In reply to Olli Pettay [:smaug] from comment #186) > Comment on attachment 8404643 [details] [diff] [review] > Mapping array buffer to uncompressed file in zip package. > > I could review > > > >- } else if (xmlHttpRequest->mResponseType == XML_HTTP_RESPONSE_TYPE_ARRAYBUFFER || > >- xmlHttpRequest->mResponseType == XML_HTTP_RESPONSE_TYPE_CHUNKED_ARRAYBUFFER) { > >+ } else if ((xmlHttpRequest->mResponseType == XML_HTTP_RESPONSE_TYPE_ARRAYBUFFER || > >+ xmlHttpRequest->mResponseType == XML_HTTP_RESPONSE_TYPE_CHUNKED_ARRAYBUFFER) && > >+ !xmlHttpRequest->mIsMappedArrayBuffer) { > Is it ever possible to have XML_HTTP_RESPONSE_TYPE_CHUNKED_ARRAYBUFFER and > xmlHttpRequest->mIsMappedArrayBuffer? No, mIsMappedArrayBuffer can only be true when mResponseType is XML_HTTP_RESPONSE_TYPE_ARRAYBUFFER. To be more precise, the code might be written like below. Do you suggest this way? } else if ((xmlHttpRequest->mResponseType == XML_HTTP_RESPONSE_TYPE_ARRAYBUFFER && !xmlHttpRequest->mIsMappedArrayBuffer) || xmlHttpRequest->mResponseType == XML_HTTP_RESPONSE_TYPE_CHUNKED_ARRAYBUFFER) { > > >+ArrayBufferBuilder::mapToFileInPackage(const nsCString& aFile, > >+ nsIFile* aJarFile) > Why lowercase 'm' ? Or does the whole ArrayBufferBuilder use this odd coding style... I guess so. Yes, unfortunately.
Attachment #8404648 - Flags: review?(fabrice) → review+
Comment on attachment 8404636 [details] [diff] [review] Support mapped array buffer type and transferable. Review of attachment 8404636 [details] [diff] [review]: ----------------------------------------------------------------- Yeah, the sync XHR case is annoying. I wish I had a good answer for that. But it does seem a little weird to have 3 "copies" of the data, especially if they're using up 3 file descriptors. ::: js/src/vm/ArrayBufferObject.cpp @@ +526,5 @@ > > +void * > +ArrayBufferObject::createMappedArrayBuffer(int fd, size_t offset, size_t length) > +{ > + return AllocateMappedContent(fd, offset, length, 8); Please name this constant. static const size_t or #define, either is fine. @@ +1101,5 @@ > +JS_IsMappedArrayBufferObject(JSObject *obj) > +{ > + obj = CheckedUnwrap(obj); > + if (!obj) > + return nullptr; false
Attachment #8404636 - Flags: review?(sphink) → review+
(In reply to Steve Fink [:sfink] from comment #188) > Comment on attachment 8404636 [details] [diff] [review] > Support mapped array buffer type and transferable. > > Review of attachment 8404636 [details] [diff] [review]: > ----------------------------------------------------------------- > > Yeah, the sync XHR case is annoying. I wish I had a good answer for that. > But it does seem a little weird to have 3 "copies" of the data, especially > if they're using up 3 file descriptors. Yes, making 3 copies of same data is strange, and I've figured out how to avoid that. Thanks for your review. > > ::: js/src/vm/ArrayBufferObject.cpp > @@ +526,5 @@ > > > > +void * > > +ArrayBufferObject::createMappedArrayBuffer(int fd, size_t offset, size_t length) > > +{ > > + return AllocateMappedContent(fd, offset, length, 8); > > Please name this constant. static const size_t or #define, either is fine. OK, will use static const size_t. > > @@ +1101,5 @@ > > +JS_IsMappedArrayBufferObject(JSObject *obj) > > +{ > > + obj = CheckedUnwrap(obj); > > + if (!obj) > > + return nullptr; > > false Thanks.
Addressed review comments.
Attachment #8394225 - Attachment is obsolete: true
Attachment #8404636 - Attachment is obsolete: true
Attachment #8406094 - Flags: review+
Attachment #8394216 - Flags: review+
Attached patch Part 3-1: New libjar APIs. (deleted) — Splinter Review
aklotz, could you help review the libjar changes?
Attachment #8397045 - Attachment is obsolete: true
Attachment #8406098 - Flags: review?(aklotz)
(Clearing the flag. As I said, I could review.)
Flags: needinfo?(bugs)
Attached patch Part 3-2: XHR changes for mapping array buffer. (obsolete) (deleted) — Splinter Review
smaug, the patch is ready for review.
Attachment #8404643 - Attachment is obsolete: true
Attachment #8406102 - Flags: review?(bugs)
Clearing the ni flag, as the issue in comment 181 is resolved.
Flags: needinfo?(bent.mozilla)
The patch added a test case with sync XHR, for the issue in comment 181.
Attachment #8406104 - Flags: review?(bugs)
Attachment #8397049 - Attachment description: Part 3-2: Test case to get array buffer by XHR with app:// URL. → Test case to get array buffer by XHR with app:// URL.
We have other places where we could mmap instead of read compressed files from the jar from C++ (fonts). Could we use the same trick there?
Comment on attachment 8406098 [details] [diff] [review] Part 3-1: New libjar APIs. Review of attachment 8406098 [details] [diff] [review]: ----------------------------------------------------------------- ::: modules/libjar/nsJARChannel.cpp @@ +825,5 @@ > return NS_OK; > } > > +NS_IMETHODIMP > +nsJARChannel::GetJarFile(nsIFile* *aFile) Nit: please declare the parameter as nsIFile **aFile for consistency with the rest of the code in this file.
Attachment #8406098 - Flags: review?(aklotz) → review+
Attachment #8406104 - Flags: review?(bugs) → review+
Comment on attachment 8406102 [details] [diff] [review] Part 3-2: XHR changes for mapping array buffer. >- } else if (xmlHttpRequest->mResponseType == XML_HTTP_RESPONSE_TYPE_ARRAYBUFFER || >- xmlHttpRequest->mResponseType == XML_HTTP_RESPONSE_TYPE_CHUNKED_ARRAYBUFFER) { >+ } else if ((xmlHttpRequest->mResponseType == XML_HTTP_RESPONSE_TYPE_ARRAYBUFFER || >+ xmlHttpRequest->mResponseType == XML_HTTP_RESPONSE_TYPE_CHUNKED_ARRAYBUFFER) && >+ !xmlHttpRequest->mIsMappedArrayBuffer) { Do here what you said in an earlier comment. mIsMappedArrayBuffer can be true only with XML_HTTP_RESPONSE_TYPE_ARRAYBUFFER > // Set up arraybuffer > if (mResponseType == XML_HTTP_RESPONSE_TYPE_ARRAYBUFFER && NS_SUCCEEDED(status)) { >- int64_t contentLength; >- rv = channel->GetContentLength(&contentLength); >- if (NS_SUCCEEDED(rv) && >- contentLength > 0 && >- contentLength < XML_HTTP_REQUEST_MAX_CONTENT_LENGTH_PREALLOCATE) { >- mArrayBufferBuilder.setCapacity(static_cast<int32_t>(contentLength)); >+ if (mIsMappedArrayBuffer) { >+ nsCOMPtr<nsIJARChannel> jarChannel = do_QueryInterface(channel); >+ if (jarChannel) { >+ nsCOMPtr<nsIURI> uri; >+ rv = channel->GetURI(getter_AddRefs(uri)); >+ if (NS_SUCCEEDED(rv)) { >+ nsCString file; >+ nsCString scheme; Use nsAuto*String for string objects on stack. Same also elsewhere >+ uri->GetScheme(scheme); >+ if (scheme.LowerCaseEqualsLiteral("app")) { >+ uri->GetPath(file); >+ file.Trim("/", true, false, false); This needs some comments. And tests. >+ } else if (scheme.LowerCaseEqualsLiteral("jar")) { >+ nsCOMPtr<nsIJARURI> jarURI = do_QueryInterface(uri); >+ if (jarURI) { >+ jarURI->GetJAREntry(file); >+ } >+ } >+ nsCOMPtr<nsIFile> jarFile; >+ jarChannel->GetJarFile(getter_AddRefs(jarFile)); >+ rv = mArrayBufferBuilder.mapToFileInPackage(file, jarFile); >+ if (NS_WARN_IF(NS_FAILED(rv))) { >+ mIsMappedArrayBuffer = false; >+ } else { >+ channel->SetContentType(NS_LITERAL_CSTRING("application/mem-mapped")); >+ } >+ } >+ } >+ } >+ if (!mIsMappedArrayBuffer) { You have if (mIsMappedArrayBuffer) { ... } if (!mIsMappedArrayBuffer) { ... } if-else would look better. >+ if (mResponseType == XML_HTTP_RESPONSE_TYPE_ARRAYBUFFER && >+ Preferences::GetBool("dom.mapped_arraybuffer.enabled", false)) { >+ nsCOMPtr<nsIURI> uri; >+ nsCString scheme; >+ >+ rv = mChannel->GetURI(getter_AddRefs(uri)); >+ if (NS_SUCCEEDED(rv)) { >+ uri->GetScheme(scheme); >+ if ((scheme.LowerCaseEqualsLiteral("app") || >+ scheme.LowerCaseEqualsLiteral("jar")) && >+ !mIsMappedArrayBuffer) { >+ mIsMappedArrayBuffer = true; >+ } >+ } >+ } You should set mIsMappedArrayBuffer to false if we're not going to use it. One may reuse XHR. >+nsresult >+ArrayBufferBuilder::mapToFileInPackage(const nsCString& aFile, >+ nsIFile* aJarFile) >+{ >+#ifdef XP_WIN >+ // TODO: Bug 988813 - Support memory mapped array buffer for Windows platform. >+ MOZ_CRASH("Not implemented"); >+ return NS_ERROR_NOT_IMPLEMENTED; >+#else Hmm, we need to test this stuff on all the platforms, and then perhaps disable those tests on windows. I don't see where the test enables this stuff I'd like to see tests for larger data, something which less likely fits into necko's buffers at once.
Attachment #8406102 - Flags: review?(bugs) → review-
(In reply to Andreas Gal :gal from comment #196) > We have other places where we could mmap instead of read compressed files > from the jar from C++ (fonts). Could we use the same trick there? If the font files are stored in jar, the same trick can be applied there. 1. Re-package to store font files as uncompressed in jar. 2. Create a mmap ptr to the file offset in jar. 3. Pass the mmap ptr to wherever needed. Looking at the font code, I think we can pass the mmap ptr instead of malloc buffer into FT_New_Memory_Face(). http://hg.mozilla.org/mozilla-central/file/215080b813a7/gfx/thebes/gfxFT2FontList.cpp 122 if (aFontEntry->mFilename[0] != '/') { 123 nsRefPtr<nsZipArchive> reader = 124 Omnijar::GetReader(Omnijar::Type::GRE); 125 nsZipItem *item = reader->GetItem(aFontEntry->mFilename.get()); 126 NS_ASSERTION(item, "failed to find zip entry"); 127 128 uint32_t bufSize = item->RealSize(); 129 mFontDataBuf = static_cast<uint8_t*>(moz_malloc(bufSize)); 130 if (mFontDataBuf) { 131 nsZipCursor cursor(item, reader, mFontDataBuf, bufSize); 132 cursor.Copy(&bufSize); 133 NS_ASSERTION(bufSize == item->RealSize(), 134 "error reading bundled font"); 135 136 if (FT_Err_Ok != FT_New_Memory_Face(ft, mFontDataBuf, bufSize, 137 aFontEntry->mFTFontIndex, 138 &mFace)) {
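For the font case, a rough POSIX-only sketch of the idea (entryOffset and entrySize stand for the stored entry's absolute data offset and real size inside the zip; how they are obtained from nsZipArchive is not shown, and the offset rounding is needed because mmap requires a page-aligned file offset):

  #include <fcntl.h>
  #include <stddef.h>
  #include <stdint.h>
  #include <sys/mman.h>
  #include <unistd.h>

  static const uint8_t* MapStoredEntry(const char* aZipPath,
                                       size_t entryOffset, size_t entrySize,
                                       void** aMapBase, size_t* aMapLen)
  {
    int fd = open(aZipPath, O_RDONLY);
    if (fd < 0) {
      return nullptr;
    }

    // Round the entry offset down to a page boundary and keep the slack.
    const size_t pageSize = sysconf(_SC_PAGESIZE);
    const size_t mapOffset = entryOffset & ~(pageSize - 1);
    const size_t slack = entryOffset - mapOffset;

    // MAP_PRIVATE keeps the pages copy-on-write and reclaimable by the kernel.
    void* base = mmap(nullptr, entrySize + slack, PROT_READ, MAP_PRIVATE,
                      fd, mapOffset);
    close(fd);  // the mapping stays valid after the fd is closed
    if (base == MAP_FAILED) {
      *aMapBase = nullptr;
      return nullptr;
    }

    *aMapBase = base;
    *aMapLen = entrySize + slack;
    return static_cast<const uint8_t*>(base) + slack;
  }

The returned pointer could then be fed to FT_New_Memory_Face() in place of mFontDataBuf, with munmap(*aMapBase, *aMapLen) replacing moz_free() when the face goes away.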
Attached patch Part 3-2: XHR changes for mapped array buffer. (obsolete) (deleted) — Splinter Review
Attachment #8406102 - Attachment is obsolete: true
Attachment #8408848 - Flags: review?(bugs)
Support larger data as suggested in comment 198.
Attachment #8404648 - Attachment is obsolete: true
Attachment #8408852 - Flags: review?(bugs)
1. Support larger data as suggested in comment 198. 2. Fix the security issue by running from mochitest-chrome. Note: This test is currently only enabled for b2g. Unfortunately, the try server didn't run mochitest-chrome tests for b2g desktop. We'll enable it for the Linux platform when bug 998243 is fixed.
Attachment #8397049 - Attachment is obsolete: true
Attachment #8406104 - Attachment is obsolete: true
Attachment #8408858 - Flags: review?(bugs)
Depends on: 998243
Attachment #8408858 - Attachment is obsolete: true
Attachment #8408858 - Flags: review?(bugs)
Attachment #8408895 - Flags: review?(fabrice)
Attachment #8408895 - Flags: review?(bugs)
Summary of changes: 1. Fix the security issue by running from mochitest-chrome. 2. Support larger data as suggested in comment 198. 3. Enable the test for the Linux platform. 4. Uninstall the app at the end.
Attachment #8408895 - Attachment is obsolete: true
Attachment #8408895 - Flags: review?(fabrice)
Attachment #8408895 - Flags: review?(bugs)
Attachment #8408941 - Flags: review?(fabrice)
Attachment #8408941 - Flags: review?(bugs)
Cancelled the previous review request due to a try server failure. This is the revised patch, with a new try server result: https://tbpl.mozilla.org/?tree=Try&rev=bbfac5f49850
Attachment #8408941 - Attachment is obsolete: true
Attachment #8408941 - Flags: review?(fabrice)
Attachment #8408941 - Flags: review?(bugs)
Attachment #8409593 - Flags: review?(fabrice)
Attachment #8409593 - Flags: review?(bugs)
Depends on: 999140
Will bug 999140 affect the patches in this bug?
Comment on attachment 8408852 [details] [diff] [review] Part 3-3: Test case to get array buffer by XHR with jar:// URL. You call checkData in many places and that will then start running next test because of cb(). Yet there are ok() checks after checkData(). So could you move ok(ct.indexOf("mem-mapped") != -1, "Data is memory-mapped"); etc to happen before checkData() I'm missing something. Why does test_non_mapped() end up dealing non-mapped data?
Attachment #8408852 - Flags: review?(bugs) → review-
(In reply to Olli Pettay [:smaug] from comment #209) > > I'm missing something. Why does test_non_mapped() end up dealing > non-mapped data? I mean this part needs some comments; same also in the app: test. Why is data_2.txt different from data_1.txt?
Comment on attachment 8409593 [details] [diff] [review] Part 3-4: Test case to get array buffer by XHR with app:// URL. ># HG changeset patch ># Parent d59d7a0974f964eed0cd110249cbc723db49f79c ># User Shian-Yow Wu <swu@mozilla.com> >Bug 945152 - Part 3-4: Test case to get array buffor by XHR with app:// URL. > >diff --git a/dom/apps/tests/chrome.ini b/dom/apps/tests/chrome.ini >--- a/dom/apps/tests/chrome.ini >+++ b/dom/apps/tests/chrome.ini >@@ -1,7 +1,12 @@ > [DEFAULT] >+support-files = >+ file_bug_945152.html >+ file_bug_945152.sjs > > [test_apps_service.xul] >+[test_bug_945152.html] >+run-if = os == 'linux' > [test_operator_app_install.js] > [test_operator_app_install.xul] > # bug 928262 > skip-if = os == "win" >diff --git a/dom/apps/tests/file_bug_945152.html b/dom/apps/tests/file_bug_945152.html >new file mode 100644 >--- /dev/null >+++ b/dom/apps/tests/file_bug_945152.html >@@ -0,0 +1,90 @@ >+<html> >+<head> >+<script> >+ >+ var gData1 = "TEST_DATA_1:ABCDEFGHIJKLMNOPQRSTUVWXYZ"; >+ var gData2 = "TEST_DATA_2:1234567890"; >+ var gPaddingChar = '.'; >+ var gPaddingSize = 10000; >+ var gPadding = ""; >+ >+ for (var i = 0; i < gPaddingSize; i++) { >+ gPadding += gPaddingChar; >+ } >+ >+ function sendMessage(msg) { >+ alert(msg); >+ } >+ >+ function ok(p, msg) { >+ if (p) >+ sendMessage("OK: " + msg); >+ else >+ sendMessage("KO: " + msg); >+ } >+ >+ function testXHR(file, data_head, mapped, cb) { >+ var xhr = new XMLHttpRequest(); >+ xhr.open('GET', file); >+ xhr.responseType = 'arraybuffer'; >+ xhr.onreadystatechange = function xhrReadystatechange() { >+ if (xhr.readyState !== xhr.DONE) { >+ return; >+ } >+ if (xhr.status && xhr.status == 200) { >+ var data = xhr.response; >+ ok(data, "Data is non-null"); >+ var str = String.fromCharCode.apply(null, Uint8Array(data)); >+ ok(str.length == data_head.length + gPaddingSize, "Data size is correct"); >+ ok(str.slice(0, data_head.length) == data_head, "Data head is correct"); >+ ok(str.slice(data_head.length) == gPadding, "Data padding is correct"); >+ var ct = xhr.getResponseHeader("Content-Type"); >+ if (mapped) { >+ ok(ct.indexOf("mem-mapped") != -1, "Data is memory-mapped"); >+ } else { >+ ok(ct.indexOf("mem-mapped") == -1, "Data is not memory-mapped"); >+ } >+ cb(); >+ } else { >+ ok(false, "XHR error: " + xhr.status + " - " + xhr.statusText + "\n"); >+ cb(); >+ } >+ } >+ xhr.send(); >+ } >+ >+ // Memory-mapped array buffer. >+ function test_mapped() { >+ testXHR('data_1.txt', gData1, true, runTests); >+ } >+ >+ // Non memory-mapped array buffer. 
>+ function test_non_mapped() { >+ testXHR('data_2.txt', gData2, false, runTests); >+ } >+ >+ var tests = [ >+ test_mapped, >+ test_non_mapped >+ ]; >+ >+ function runTests() { >+ if (!tests.length) { >+ sendMessage("DONE"); >+ return; >+ } >+ >+ var test = tests.shift(); >+ test(); >+ } >+ >+ function go() { >+ ok(true, "Launched app"); >+ runTests(); >+ } >+ >+</script> >+</head> >+<body onload="go();"> >+</body> >+</html> >diff --git a/dom/apps/tests/file_bug_945152.sjs b/dom/apps/tests/file_bug_945152.sjs >new file mode 100644 >--- /dev/null >+++ b/dom/apps/tests/file_bug_945152.sjs >@@ -0,0 +1,115 @@ >+const { classes: Cc, interfaces: Ci, utils: Cu } = Components; >+ >+// From prio.h >+const PR_RDWR = 0x04; >+const PR_CREATE_FILE = 0x08; >+const PR_TRUNCATE = 0x20; >+ >+Cu.import("resource://gre/modules/FileUtils.jsm"); >+Cu.import("resource://gre/modules/NetUtil.jsm"); >+ >+var gBasePath = "chrome/dom/apps/tests/"; >+var gAppPath = gBasePath + "file_bug_945152.html"; >+var gData1 = "TEST_DATA_1:ABCDEFGHIJKLMNOPQRSTUVWXYZ"; >+var gData2 = "TEST_DATA_2:1234567890"; >+var gPaddingChar = '.'; >+var gPaddingSize = 10000; >+var gPackageName = "file_bug_945152.zip"; >+var gManifestData = { >+ name: "Testing app", >+ description: "Testing app", >+ package_path: "http://test/chrome/dom/apps/tests/file_bug_945152.sjs?getPackage=1", >+ launch_path: "/index.html", >+ developer: { >+ name: "devname", >+ url: "http://dev.url" >+ }, >+ default_locale: "en-US" >+}; >+ >+function handleRequest(request, response) { >+ var query = getQuery(request); >+ >+ // Create the packaged app. >+ if ("createApp" in query) { >+ var zipWriter = Cc["@mozilla.org/zipwriter;1"] >+ .createInstance(Ci.nsIZipWriter); >+ var zipFile = FileUtils.getFile("TmpD", [gPackageName]); >+ zipWriter.open(zipFile, PR_RDWR | PR_CREATE_FILE | PR_TRUNCATE); >+ >+ var manifest = JSON.stringify(gManifestData); >+ addZipEntry(zipWriter, manifest, "manifest.webapp", Ci.nsIZipWriter.COMPRESSION_BEST); >+ var app = readFile(gAppPath, false); >+ addZipEntry(zipWriter, app, "index.html", Ci.nsIZipWriter.COMPRESSION_BEST); >+ var padding = ""; >+ for (var i = 0; i < gPaddingSize; i++) { >+ padding += gPaddingChar; >+ } >+ var data = gData1 + padding; >+ addZipEntry(zipWriter, data, "data_1.txt", Ci.nsIZipWriter.COMPRESSION_NONE); >+ data = gData2 + padding; >+ addZipEntry(zipWriter, data, "data_2.txt", Ci.nsIZipWriter.COMPRESSION_BEST); >+ >+ zipWriter.alignStoredFiles(4096); >+ zipWriter.close(); >+ >+ response.setHeader("Content-Type", "text/html", false); >+ response.write("OK"); >+ return; >+ } >+ >+ // Check if we're generating a webapp manifest. >+ if ("getManifest" in query) { >+ response.setHeader("Content-Type", "application/x-web-app-manifest+json", false); >+ response.write(JSON.stringify(gManifestData)); >+ return; >+ } >+ >+ // Serve the application package. 
>+ if ("getPackage" in query) { >+ var resource = readFile(gPackageName, true); >+ response.setHeader("Content-Type", >+ "Content-Type: application/java-archive", false); >+ response.write(resource); >+ return; >+ } >+ >+ response.setHeader("Content-type", "text-html", false); >+ response.write("KO"); >+} >+ >+function getQuery(request) { >+ var query = {}; >+ request.queryString.split('&').forEach(function (val) { >+ var [name, value] = val.split('='); >+ query[name] = unescape(value); >+ }); >+ return query; >+} >+ >+// File and resources helpers >+ >+function addZipEntry(zipWriter, entry, entryName, compression) { >+ var stream = Cc["@mozilla.org/io/string-input-stream;1"] >+ .createInstance(Ci.nsIStringInputStream); >+ stream.setData(entry, entry.length); >+ zipWriter.addEntryStream(entryName, Date.now(), >+ compression, stream, false); >+} >+ >+function readFile(path, fromTmp) { >+ var dir = fromTmp ? "TmpD" : "CurWorkD"; >+ var file = Cc["@mozilla.org/file/directory_service;1"] >+ .getService(Ci.nsIProperties) >+ .get(dir, Ci.nsILocalFile); >+ var fstream = Cc["@mozilla.org/network/file-input-stream;1"] >+ .createInstance(Ci.nsIFileInputStream); >+ var split = path.split("/"); >+ for(var i = 0; i < split.length; ++i) { >+ file.append(split[i]); >+ } >+ fstream.init(file, -1, 0, 0); >+ var data = NetUtil.readInputStreamToString(fstream, fstream.available()); >+ fstream.close(); >+ return data; >+} >diff --git a/dom/apps/tests/test_bug_945152.html b/dom/apps/tests/test_bug_945152.html >new file mode 100644 >--- /dev/null >+++ b/dom/apps/tests/test_bug_945152.html >@@ -0,0 +1,164 @@ >+<!DOCTYPE HTML> >+<html> >+<!-- >+https://bugzilla.mozilla.org/show_bug.cgi?id=945152 >+--> >+<head> >+ <meta charset="utf-8"> >+ <title>Test for Bug 945152</title> >+ <script type="application/javascript" src="chrome://mochikit/content/tests/SimpleTest/SimpleTest.js"></script> >+ <link rel="stylesheet" type="text/css" href="chrome://mochikit/content/tests/SimpleTest/test.css"/> >+ <script type="application/javascript;version=1.7"> >+ >+ const { classes: Cc, interfaces: Ci, utils: Cu } = Components; >+ >+ SimpleTest.waitForExplicitFinish(); >+ >+ const gBaseURL = 'http://test/chrome/dom/apps/tests/'; >+ const gSJS = gBaseURL + 'file_bug_945152.sjs'; >+ var gGenerator = runTest(); >+ >+ // When using SpecialPowers.autoConfirmAppInstall, it skips the local >+ // installation, and we need this mock to get correct package path. >+ Cu.import("resource://gre/modules/Services.jsm"); >+ Cu.import("resource://gre/modules/WebappOSUtils.jsm"); >+ var oldWebappOSUtils = WebappOSUtils; >+ WebappOSUtils.getPackagePath = function(aApp) { >+ return aApp.basePath + "/" + aApp.id; >+ } >+ >+ SimpleTest.registerCleanupFunction(() => { >+ WebappOSUtils = oldWebappOSUtils; >+ }); >+ >+ function go() { >+ gGenerator.next(); >+ } >+ >+ function continueTest() { >+ try { gGenerator.next(); } >+ catch (e) { dump("Got exception: " + e + "\n"); } >+ } >+ >+ function mozAppsError() { >+ ok(false, "mozApps error: " + this.error.name); >+ finish(); >+ } >+ >+ function xhrError(event, url) { >+ var xhr = event.target; >+ ok(false, "XHR error loading " + url + ": " + xhr.status + " - " + >+ xhr.statusText); >+ finish(); >+ } >+ >+ function xhrAbort(url) { >+ ok(false, "XHR abort loading " + url); >+ finish(); >+ } >+ >+ function runTest() { >+ // Set up. 
>+ SpecialPowers.setAllAppsLaunchable(true); >+ SpecialPowers.pushPrefEnv({ >+ "set": [ >+ ["dom.mozBrowserFramesEnabled", true], >+ ["dom.mapped_arraybuffer.enabled", true] >+ ] >+ }, continueTest); >+ yield undefined; >+ >+ SpecialPowers.autoConfirmAppInstall(continueTest); >+ yield undefined; >+ >+ // Create app on server side. >+ createApp(continueTest); >+ yield undefined; >+ >+ // Install app. >+ var app; >+ navigator.mozApps.mgmt.oninstall = function(e) { >+ ok(true, "Got oninstall event"); >+ app = e.application; >+ app.ondownloaderror = mozAppsError; >+ app.ondownloadsuccess = continueTest; >+ }; >+ >+ var req = navigator.mozApps.installPackage(gSJS + '?getManifest=1'); >+ req.onerror = mozAppsError; >+ req.onsuccess = function() { >+ ok(true, "Application installed"); >+ }; >+ yield undefined; >+ >+ // Launch app. >+ launchApp(app, continueTest); >+ yield undefined; >+ >+ // Uninstall app. >+ var req = navigator.mozApps.mgmt.uninstall(app); >+ req.onerror = mozAppsError; >+ req.onsuccess = continueTest; >+ yield undefined; >+ >+ // All done. >+ ok(true, "All done"); >+ finish(); >+ } >+ >+ function createApp(cb) { >+ var xhr = new XMLHttpRequest(); >+ var url = gSJS + '?createApp=1'; >+ xhr.addEventListener("load", function() { is(xhr.responseText, "OK", "createApp OK"); cb(); }); >+ xhr.addEventListener("error", event => xhrError(event, url)); >+ xhr.addEventListener("abort", event => xhrAbort(url)); >+ xhr.open('GET', url, true); >+ xhr.send(); >+ } >+ >+ function launchApp(app, cb) { >+ // Set up the app. >+ var ifr = document.createElement('iframe'); >+ ifr.setAttribute('mozbrowser', 'true'); >+ ifr.setAttribute('mozapp', app.manifestURL); >+ ifr.setAttribute('src', app.origin + app.manifest.launch_path); >+ var domParent = document.getElementById('container'); >+ >+ // Set us up to listen for messages from the app. >+ var listener = function(e) { >+ var message = e.detail.message; >+ if (/OK/.exec(message)) { >+ ok(true, "Message from app: " + message); >+ } else if (/KO/.exec(message)) { >+ ok(false, "Message from app: " + message); >+ } else if (/DONE/.exec(message)) { >+ ok(true, "Messaging from app complete"); >+ ifr.removeEventListener('mozbrowsershowmodalprompt', listener); >+ domParent.removeChild(ifr); >+ cb(); >+ } >+ } >+ >+ // This event is triggered when the app calls "alert". >+ ifr.addEventListener('mozbrowsershowmodalprompt', listener, false); >+ >+ // Add the iframe to the DOM, triggering the launch. >+ domParent.appendChild(ifr); >+ } >+ >+ function finish() { >+ SimpleTest.finish(); >+ } >+ >+ </script> >+</head> >+<body onload="go()"> >+<a target="_blank" href="https://bugzilla.mozilla.org/show_bug.cgi?id=945152">Mozilla Bug 945152</a> >+<p id="display"></p> >+<div id="content" style="display: none"> >+</div> >+<pre id="test"> >+</pre> >+<div id="container"></div> >+</body> >+</html>
Attachment #8409593 - Flags: review?(bugs) → review+
Comment on attachment 8408848 [details] [diff] [review] Part 3-2: XHR changes for mapped array buffer. > rv = NS_StartCORSPreflight(mChannel, listener, > mPrincipal, withCredentials, > mCORSUnsafeHeaders, > getter_AddRefs(mCORSPreflightChannel)); > NS_ENSURE_SUCCESS(rv, rv); > } > else { >+ mIsMappedArrayBuffer = false; >+ if (mResponseType == XML_HTTP_RESPONSE_TYPE_ARRAYBUFFER && >+ Preferences::GetBool("dom.mapped_arraybuffer.enabled", false)) { >+ nsCOMPtr<nsIURI> uri; >+ nsAutoCString scheme; >+ >+ rv = mChannel->GetURI(getter_AddRefs(uri)); >+ if (NS_SUCCEEDED(rv)) { >+ uri->GetScheme(scheme); >+ if (scheme.LowerCaseEqualsLiteral("app") || >+ scheme.LowerCaseEqualsLiteral("jar")) { >+ mIsMappedArrayBuffer = true; >+ } >+ } >+ } Why we don't support mapped arraybuffer when cors is used? > ArrayBufferBuilder::getArrayBuffer(JSContext* aCx) > { >+ if (mMapPtr) { >+ JSObject* obj = JS_NewMappedArrayBufferWithContents(aCx, mLength, mMapPtr); >+ if (!obj) { >+ JS_ReleaseMappedArrayBufferContents(mMapPtr, mLength); >+ } >+ mMapPtr = nullptr; >+ >+ return obj; So obj will release MappedArrayBuffer when it is finalized? If so, add a comment. >+ // If file was added to the package as stored(uncompressed), map to the >+ // offset of file in zip package. This is so magical that I hope it will be documented somewhere else too. I'd like to see either cors fixed, or some explanation why it isn't supported.
Attachment #8408848 - Flags: review?(bugs) → review-
(In reply to Olli Pettay [:smaug] from comment #209) > Comment on attachment 8408852 [details] [diff] [review] > Part 3-3: Test case to get array buffer by XHR with jar:// URL. > > You call checkData in many places and that will then start running next test > because of cb(). Yet there are ok() checks after checkData(). > So could you move ok(ct.indexOf("mem-mapped") != -1, "Data is > memory-mapped"); etc to happen before checkData() Thank you for catching this! > > > I'm missing something. Why does test_non_mapped() end up dealing > non-mapped data? The purpose is to make sure that when "dom.mapped_arraybuffer.enabled" is true, a compressed file in the package is not handled by memory mapping, and the data is still retrieved correctly.
(In reply to Olli Pettay [:smaug] from comment #212) > Comment on attachment 8408848 [details] [diff] [review] > Part 3-2: XHR changes for mapped array buffer. > > > rv = NS_StartCORSPreflight(mChannel, listener, > > mPrincipal, withCredentials, > > mCORSUnsafeHeaders, > > getter_AddRefs(mCORSPreflightChannel)); > > NS_ENSURE_SUCCESS(rv, rv); > > } > > else { > >+ mIsMappedArrayBuffer = false; > >+ if (mResponseType == XML_HTTP_RESPONSE_TYPE_ARRAYBUFFER && > >+ Preferences::GetBool("dom.mapped_arraybuffer.enabled", false)) { > >+ nsCOMPtr<nsIURI> uri; > >+ nsAutoCString scheme; > >+ > >+ rv = mChannel->GetURI(getter_AddRefs(uri)); > >+ if (NS_SUCCEEDED(rv)) { > >+ uri->GetScheme(scheme); > >+ if (scheme.LowerCaseEqualsLiteral("app") || > >+ scheme.LowerCaseEqualsLiteral("jar")) { > >+ mIsMappedArrayBuffer = true; > >+ } > >+ } > >+ } > Why we don't support mapped arraybuffer when cors is used? The CORS preflight in nsXMLHttpRequest.cpp only happens when state has XML_HTTP_REQUEST_NEED_AC_PREFLIGHT flag enabled. And the XML_HTTP_REQUEST_NEED_AC_PREFLIGHT flag could only be enabled in CheckChannelForCrossSiteRequest() when it is nsIHttpChannel. For nsIJARChannel, the CheckChannelForCrossSiteRequest() check would return error. Please see http://dxr.mozilla.org/mozilla-central/source/content/base/src/nsXMLHttpRequest.cpp#1503 1498 // This is a cross-site request 1499 mState |= XML_HTTP_REQUEST_USE_XSITE_AC; 1500 1501 // Check if we need to do a preflight request. 1502 nsCOMPtr<nsIHttpChannel> httpChannel = do_QueryInterface(aChannel); 1503 NS_ENSURE_TRUE(httpChannel, NS_ERROR_DOM_BAD_URI); 1504 1505 nsAutoCString method; 1506 httpChannel->GetRequestMethod(method); 1507 if (!mCORSUnsafeHeaders.IsEmpty() || 1508 (mUpload && mUpload->HasListeners()) || 1509 (!method.LowerCaseEqualsLiteral("get") && 1510 !method.LowerCaseEqualsLiteral("post") && 1511 !method.LowerCaseEqualsLiteral("head"))) { 1512 mState |= XML_HTTP_REQUEST_NEED_AC_PREFLIGHT; 1513 } 1514 1515 return NS_OK; 1516 } If I understand correctly, this means CORS for XHR does not support nsIJARChannel(app:// and jar://). In this bug, we only support mapped array buffer for app:// and jar:// protocols. > > > ArrayBufferBuilder::getArrayBuffer(JSContext* aCx) > > { > >+ if (mMapPtr) { > >+ JSObject* obj = JS_NewMappedArrayBufferWithContents(aCx, mLength, mMapPtr); > >+ if (!obj) { > >+ JS_ReleaseMappedArrayBufferContents(mMapPtr, mLength); > >+ } > >+ mMapPtr = nullptr; > >+ > >+ return obj; > So obj will release MappedArrayBuffer when it is finalized? > If so, add a comment. Yes, will add a comment. > > >+ // If file was added to the package as stored(uncompressed), map to the > >+ // offset of file in zip package. > This is so magical that I hope it will be documented somewhere else too. OK, will describe it more in function definition. > > > I'd like to see either cors fixed, or some explanation why it isn't > supported. It is not going to be supported by bug, as explained in the above.
Correction for typo in previous comment. s/by bug/by this bug/
Attached patch Part 3-2: XHR changes for mapped array buffer. (obsolete) (deleted) — Splinter Review
Attachment #8408848 - Attachment is obsolete: true
Attachment #8414406 - Flags: review?(bugs)
Attachment #8408852 - Attachment is obsolete: true
Attachment #8414408 - Flags: review?(bugs)
(In reply to Paul Theriault [:pauljt] from comment #63) > So I guess the main security concern here is getting this to work > out-of-process. From comment 59, it sounds like the case when the data is > stored uncompressed and we can map the file directly from the app package, > but I guess the extract case takes more work (unless there already is a > remote API for creating temp files?) > > Creating a temp file also raises the question or making sure other processes > can't modify the temp file while it is mmap'ed (i.e. setting file > permissions, assuming there is a remote API for parent to create and open > temp file for the child) The bug now targets to support chrome process, and mapping to originally uncompress files in zip package. So the previous two security concerns are not valid for this bug now(related items to be handed in follow up bug 988815 and bug 988816). For this bug, is there other security concerns we should pay attention?
Flags: needinfo?(ptheriault)
Attachment #8409593 - Flags: review?(fabrice) → review+
(In reply to Shian-Yow Wu [:shianyow] from comment #219) > The bug now targets to support chrome process, How? Where do we limit this all the chrome process?
Group: dom-core-security
Group: dom-core-security
(In reply to Olli Pettay [:smaug] from comment #220) > (In reply to Shian-Yow Wu [:shianyow] from comment #219) > > The bug now targets to support chrome process, > How? Where do we limit this all the chrome process? We didn't limit it to run only in the chrome process. In some cases it can work in a content process, but in others it cannot (and it falls back to the non memory-mapped way). The WIP patch in part 4 tried to handle it; however, it seems a little more complicated than imagined, and we shouldn't block on that. So we only claim to support the chrome process in this bug, and will handle the content process case in another bug (bug 988816). Take the keyboard app for example: the memory-mapped array buffer can work when it runs OOP. That's because the current keyboard app is a CoreApp, so the zip package will be accessed directly, which means a local open(). See the following code for reference. http://hg.mozilla.org/mozilla-central/file/fef1a56f0237/netwerk/protocol/app/AppProtocolHandler.cpp#l388
388 bool noRemote = (appInfo->mIsCoreApp ||
389                  XRE_GetProcessType() == GeckoProcessType_Default);
390
391 // In-parent and CoreApps can directly access files, so use jar:file://
392 nsAutoCString jarSpec(noRemote ? "jar:file://"
393                                : "jar:remoteopenfile://");
394 jarSpec += NS_ConvertUTF16toUTF8(appInfo->mPath) +
395            NS_LITERAL_CSTRING("/application.zip!") +
396            fileSpec;
397
However, it would not work after content process sandboxing is enabled (bug 790923), as local open() will be blocked in the future.
Attached patch Part 3-5: Limit to chrome process. (obsolete) (deleted) — Splinter Review
I am not sure whether it's best to limit it or not, but this is the patch if we decide to limit it to work only in the chrome process.
Comment on attachment 8414408 [details] [diff] [review] Part 3-3: Test case to get array buffer by XHR with jar:// URL. I
Attachment #8414408 - Flags: review?(bugs) → review+
Comment on attachment 8414406 [details] [diff] [review] Part 3-2: XHR changes for mapped array buffer. >+ if (obj && JS_IsMappedArrayBufferObject(obj)) { >+ // Discard the response if the object was neutered. >+ if (JS_GetArrayBufferByteLength(obj) == 0) { What if the length is actually 0? Why do we return false in that case? >+ return false; >+ } >+ JS::AutoValueVector argv(aCx); >+ argv.append(response); >+ transferable = OBJECT_TO_JSVAL(JS_NewArrayObject(aCx, >+ JS::HandleValueArray::subarray(argv, 0, 1))); >+ } Why we need subarray here? Wouldn't JS::HandleValueArray Couldn't you use AutoValueArray here, and then JS::HandleValueArray ctor which take such object as a param. But it is not clear to me why mapped array buffer needs special handling here.
Attachment #8414406 - Flags: review?(bugs) → review-
(In reply to Olli Pettay [:smaug] from comment #224) > Comment on attachment 8414406 [details] [diff] [review] > Part 3-2: XHR changes for mapped array buffer. > > >+ if (obj && JS_IsMappedArrayBufferObject(obj)) { > >+ // Discard the response if the object was neutered. > >+ if (JS_GetArrayBufferByteLength(obj) == 0) { > What if the length is actually 0? Why do we return false in that case? It's not allowed to create a mapped array buffer with length 0; that condition is only possible when the buffer has been neutered. But this reminds me that relying on this fact to determine whether the buffer is neutered is not ideal. I will add a new JS API to check it here. We have to return false here because we don't want invalid responses to be delivered back. The mapped array buffer can only be transferred once, and will be neutered after the transfer. If we do an XHR request in a worker, after receiving the XHR response there will be a total of 3 DispatchDOMEvent() calls sent from the main-thread XHR to the XHR in the worker. Each event triggers a transfer, but only the first event produces a valid response. It's ok for async XHR (because the first response is obtained), but a problem for the sync XHR case. For details, please refer to comment 181. > > >+ return false; > >+ } > >+ JS::AutoValueVector argv(aCx); > >+ argv.append(response); > >+ transferable = OBJECT_TO_JSVAL(JS_NewArrayObject(aCx, > >+ JS::HandleValueArray::subarray(argv, 0, 1))); > >+ } > Why we need subarray here? > Wouldn't JS::HandleValueArray > > Couldn't you use AutoValueArray here, and then JS::HandleValueArray ctor > which take such object as a param. > > But it is not clear to me why mapped array buffer needs special handling here. Yes, it's simpler to use AutoValueArray, thank you.
Depends on: 1008092
Depends on: 1008126
No longer depends on: 1008092
Depends on: 1008512
The patch moves the cloning-by-transfer part out of the previous patch; the memory saving when passing an ArrayBuffer to a worker will be handled in bug 1008512.
Attachment #8414406 - Attachment is obsolete: true
Attachment #8420741 - Flags: review?(bugs)
Attachment #8420741 - Flags: review?(bugs) → review+
Depends on: 988816
Depends on: 988813
Depends on: 988815
(In reply to Shian-Yow Wu [:shianyow] from comment #219) > (In reply to Paul Theriault [:pauljt] from comment #63) > > So I guess the main security concern here is getting this to work > > out-of-process. From comment 59, it sounds like the case when the data is > > stored uncompressed and we can map the file directly from the app package, > > but I guess the extract case takes more work (unless there already is a > > remote API for creating temp files?) > > > > Creating a temp file also raises the question or making sure other processes > > can't modify the temp file while it is mmap'ed (i.e. setting file > > permissions, assuming there is a remote API for parent to create and open > > temp file for the child) > > The bug now targets to support chrome process, and mapping to originally > uncompress files in zip package. > So the previous two security concerns are not valid for this bug now(related > items to be handed in follow up bug 988815 and bug 988816). > > For this bug, is there other security concerns we should pay attention? If this is chrome process only, then I don't have any other concerns. Note that the background to comment 59, is that we will be blocking access to the open syscall in content processes eventually. So if this is going to work in a content process with sandboxing, we will need to remote the open call. But it sounds like we don't need that for now.
Flags: sec-review?(ptheriault)
Flags: sec-review+
Flags: needinfo?(ptheriault)
No longer depends on: 998243
Attachment #8390445 - Attachment is obsolete: true
Attachment #8419243 - Attachment is obsolete: true
Closing this, as all the required parts for Firefox OS have landed. A wiki page has been created for it at: https://wiki.mozilla.org/FirefoxOS/MappedArrayBuffer Thanks to everyone for the comments!
Status: NEW → RESOLVED
Closed: 10 years ago
Keywords: leave-open
Resolution: --- → FIXED
Depends on: 1154411