Closed
Bug 797639
Opened 12 years ago
Closed 10 years ago
Remote Debugging Protocol needs a way to carry bulk data
Categories
(DevTools :: Debugger, defect, P1)
DevTools
Debugger
Tracking
(Not tracked)
RESOLVED
FIXED
Firefox 32
People
(Reporter: jimb, Assigned: jryans)
References
(Blocks 3 open bugs)
Details
(Whiteboard: [b2g][debugger-docs-needed])
Attachments
(6 files, 14 obsolete files)
(deleted), patch | jryans: review+ | Details | Diff | Splinter Review
(deleted), patch | jryans: review+ | Details | Diff | Splinter Review
(deleted), patch | vporof: review+ | Details | Diff | Splinter Review
(deleted), patch | jryans: review+ | Details | Diff | Splinter Review
(deleted), patch | jryans: review+ | Details | Diff | Splinter Review
(deleted), patch | jryans: review+ | Details | Diff | Splinter Review
At present, all packets exchanged via the remote debugging protocol are JSON forms, but this imposes overhead that can be troublesome for some applications. The protocol should provide a low-copy mechanism to support these cases.
For example, the profiler produces blocks of data that are tens of megabytes in size. Converting that data to a JavaScript string entails one copy; the call to JSON.stringify entails another; and the way we presently write the data to the nsIOutputStream probably causes a third. On a device with restricted memory (like, say, a B2G phone), these copies are problematic.
To allow participants to exchange these large datasets with as few unnecessary copies as possible, there should be a way for client or server code to temporarily take control of the connection's nsIOutputStream directly and send binary data over the connection. The recipient's code should be given the corresponding nsIInputStream and allowed to read data directly from the socket.
The bulk data extension should allow tunneling, such as that described in bug 797627. It should allow independent actors to exchange bulk data and JSON data without coordination.
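To make the framing concrete, here is a rough sketch of what the two packet forms might look like on the wire, based on the draft transport spec; the actor and type names below are purely illustrative, and the exact grammar is defined by the spec rather than by this example:

31:{"to":"root","type":"listTabs"}                        (ordinary JSON packet)
bulk profiler1 profileData 1048576:<exactly 1048576 raw bytes follow>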
Comment 1•12 years ago
(In reply to Jim Blandy :jimb from comment #0)
> The recipient's code should be given the corresponding nsIInputStream and allowed to read data
> directly from the socket.
That doesn't need to be this way if there's a standard for sending the data, which can be handled on the client side by the transport itself. The client, likely being a desktop Firefox, probably doesn't have any specific memory contention preventing it from just using JSON.
Comment 2•12 years ago
Grabbing the output stream wouldn't work in the local debugging scenario, because there are no streams involved. Fortunately, this use case doesn't apply to local debugging (we use that for desktop only, which is not memory-constrained), so we just need to make sure this functionality only applies in the remote case.
Comment 3•12 years ago
(In reply to Panos Astithas [:past] from comment #2)
> Grabbing the output stream wouldn't work in the local debugging scenario,
> because there are no streams involved. Fortunately, this use case doesn't
> apply to local debugging (we use that for desktop only, which is not
> memory-constrained), so we just need to make sure this functionality only
> applies in the remote case.
OTOH, this means an actor that implements this has to duplicate code to handle both cases. It should be possible to use an nsIPipe in the local debugging case. That being said, in the profiler case, the duplication of code will probably remain, unless we make using the DebuggerServer mandatory.
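A rough sketch of that idea, purely for illustration (the function name and pipe parameters are not from any patch):

function makeLocalBulkStreams() {
  // An nsIPipe gives the local (same-process) transport a real stream pair,
  // so bulk-aware actors can use the same code path as the remote case.
  let pipe = Cc["@mozilla.org/pipe;1"].createInstance(Ci.nsIPipe);
  pipe.init(true, true, 0, 0, null);  // non-blocking input and output
  // The actor writes bulk data into pipe.outputStream; the local client
  // reads it back from pipe.inputStream.
  return { input: pipe.inputStream, output: pipe.outputStream };
}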
Comment 4•12 years ago
(In reply to Mike Hommey [:glandium] from comment #3)
> (In reply to Panos Astithas [:past] from comment #2)
> > Grabbing the output stream wouldn't work in the local debugging scenario,
> > because there are no streams involved. Fortunately, this use case doesn't
> > apply to local debugging (we use that for desktop only, which is not
> > memory-constrained), so we just need to make sure this functionality only
> > applies in the remote case.
>
> OTOH, this means an actor that implements that has to duplicate code to
> handle both cases. It should be possible to use a nsIPipe in the local
> debugging case. That being said, in the profiler case, the duplication of
> code will probably remain, unless we make using the DebuggerServer mandatory.
We recently replaced the nsIPipe implementation because it added too much overhead for the web console. We could bring it back for the profiler, but that will preclude us from using a single connection for every tool, which is in the plans for the not-too-distant future.
Comment 5•12 years ago
(In reply to Panos Astithas [:past] from comment #4)
> (In reply to Mike Hommey [:glandium] from comment #3)
> > (In reply to Panos Astithas [:past] from comment #2)
> > > Grabbing the output stream wouldn't work in the local debugging scenario,
> > > because there are no streams involved. Fortunately, this use case doesn't
> > > apply to local debugging (we use that for desktop only, which is not
> > > memory-constrained), so we just need to make sure this functionality only
> > > applies in the remote case.
> >
> > OTOH, this means an actor that implements that has to duplicate code to
> > handle both cases. It should be possible to use a nsIPipe in the local
> > debugging case. That being said, in the profiler case, the duplication of
> > code will probably remain, unless we make using the DebuggerServer mandatory.
>
> We recently replaced the nsIPipe implementation because it added too much
> overhead for the web console. We could bring it back for the profiler, but
> that will preclude us from using a single connection for every tool, which
> is in the plans for the not-too-distant future.
My idea was that the nsIPipe would only be used for the special mode where we expect the actor to fill an nsIOutputStream.
Comment 6•12 years ago
(In reply to Mike Hommey [:glandium] from comment #5)
> (In reply to Panos Astithas [:past] from comment #4)
> > (In reply to Mike Hommey [:glandium] from comment #3)
> > > (In reply to Panos Astithas [:past] from comment #2)
> > > > Grabbing the output stream wouldn't work in the local debugging scenario,
> > > > because there are no streams involved. Fortunately, this use case doesn't
> > > > apply to local debugging (we use that for desktop only, which is not
> > > > memory-constrained), so we just need to make sure this functionality only
> > > > applies in the remote case.
> > >
> > > OTOH, this means an actor that implements that has to duplicate code to
> > > handle both cases. It should be possible to use a nsIPipe in the local
> > > debugging case. That being said, in the profiler case, the duplication of
> > > code will probably remain, unless we make using the DebuggerServer mandatory.
> >
> > We recently replaced the nsIPipe implementation because it added too much
> > overhead for the web console. We could bring it back for the profiler, but
> > that will preclude us from using a single connection for every tool, which
> > is in the plans for the not-too-distant future.
>
> My idea was that the nsIPipe would only be used for the special mode where
> we except the actor to fill a nsIOutputStream.
This is an interesting idea. It will require that we teach the transport to be replaceable, but perhaps this is not more complicated than what we will need to do in this bug anyway.
Updated•12 years ago
Whiteboard: [b2g]
Reporter
Comment 7•12 years ago
(In reply to Mike Hommey [:glandium] from comment #1)
> That doesn't need to be this way, if there's a standard for sending the
> data, which can be handled on the client side by the transport itself. The
> client, likely being a desktop Firefox, probably doesn't have any specific
> memory contention preventing it to just do json.
In this application, the client isn't constrained, true. But this is kind of a big change to the protocol, so if we're going to bother, I'd like to try to make it work for as many bulk transfer scenarios as I can practically manage.
Reporter
Comment 8•12 years ago
I would assume even our first choice outcome for the profiler will have distinct local and remote cases, to avoid the copy. (And perhaps avoid serializing at all?)
Using an nsIPipe would allow the transport to present the same interface to both remote and local participants. But it seems to me that bulk data support is only valuable in the remote case --- if you're willing to go the trouble of using bulk data, you're probably willing to have separate local and remote paths in your tool. So while it is nice in theory that we can preserve the interface in the local case, I doubt anyone will prefer to use it in practice.
It might be valuable for testing, though.
Reporter
Comment 9•12 years ago
"Using an nsIPipe would allow the transport to present the same interface to both remote and local participants."
A better way to say this would be:
Using an nsIPipe would allow the transport to present the same interface whether the participants are in the same process, or in different processes.
Updated•12 years ago
Priority: -- → P1
Comment 10•12 years ago
Need an assignee.
FILTER ON PUMPKINS.
Comment 11•12 years ago
One thought to move this forward: if the debug server could send a packet to the client telling it that it will open a new stream on port X, the client could pick up that stream to retrieve the bulk data and inform the server when it's done. This can happen asynchronously, so other clients can continue using the server.
Comment 12•12 years ago
Jim has provided a spec for this:
https://wiki.mozilla.org/Remote_Debugging_Protocol_Stream_Transport
Comment 13•12 years ago
still no assignee.
FILTER ON BLACKEAGLE.
Reporter
Updated•12 years ago
Assignee: nobody → jimb
Reporter
Comment 15•11 years ago
It seems that the transport wiki page went spammy without me noticing, and got deleted. I've asked for it to be restored.
Reporter
Comment 16•11 years ago
The draft docs for bulk transport are perhaps more reliably (if less legibly) available here:
https://github.com/jimblandy/DebuggerDocs/compare/bulk-transport
Note that this needs to be reconciled with our subprocess support: the bulk spec says that actor names *may not* contain colons, whereas the subprocess routing requires them to contain colons. The latter will be easier to change, as it's entirely a server-side detail.
Comment 17•11 years ago
On the App Manager side, this is now critical. See bug 797639.
Comment 18•11 years ago
I just talked to Paul; we think the absolutely critical bits for v1 can be reduced to:
Storage Types:
* DataStores ( for FxOS & Apps developers )
* IndexedDB
* Localstorage
- Data will be read-only.
- We will need some sort of rudimentary search / filtering.
V2 then looks like this:
* Cookies
* Appcache
* Read / write access
* Search improvements
* follow-ups based on Feedback
Comment 19•11 years ago
Jim, are you still working on that? We need that to have an efficient way to send apps to the device. And as Jeff said, we'll probably need that for bug 926449 as well.
Flags: needinfo?(jimb)
Comment 20•11 years ago
(In reply to Jeff Griffiths (:canuckistani) from comment #18)
...wrong bug, sorry.
Reporter
Comment 21•11 years ago
I am not working on this at present; I did not see comment #17 go by. I will begin work on it, or make sure someone does.
Flags: needinfo?(jimb)
Assignee
Comment 22•11 years ago
I am investigating this as a possible route to resolve slow app transfers (bug 924574).
Assignee: jimb → jryans
Status: NEW → ASSIGNED
Assignee
Comment 23•11 years ago
Jim, just realized this was assigned to you previously... If you had already made progress, please let me know! :)
Assignee
Comment 24•11 years ago
I'm working on cleaning up my prototype to get things in a reviewable state.
Jim, do you think we should add a "type" field to the bulk data packet? It seems natural to me, so that it can be passed to a specific method of an actor, like a regular packet.
Also, do we need any kind of reply mechanism for bulk data? Currently I don't have one implemented, but I wasn't sure if it should be there for parity with regular packets. I would think most "replies" would be small result codes, so it's probably fine to let people just send a separate regular packet.
Flags: needinfo?(jimb)
Reporter
Comment 25•11 years ago
(In reply to J. Ryan Stinnett [:jryans] from comment #24)
> Jim, do you think we should add a "type" field to the bulk data packet? It
> seems natural to me, so that it can be passed to a specific method of an
> actor, like a regular packet.
Sure, that would be reasonable.
> Also, do we need any kind of reply mechanism for bulk data? Currently I
> don't have one implemented, but I wasn't sure if it should be there for
> parity with regular packets. I would think most "replies" would be small
> result codes, so it's probably fine to let people just send a separate
> regular packet.
Since bulk data is really a transport-level thing, it should be readily available to both client and server as needed. So, is there any reason to add a special case? Can't the "reply" to bulk data either be more bulk data, or an ordinary packet, whichever makes more sense given the meanings of the packets?
Flags: needinfo?(jimb)
Assignee
Updated•11 years ago
Whiteboard: [b2g] → [b2g][debugger-docs-needed]
Assignee
Comment 26•11 years ago
This patch adds bulk data support in the transport layer only.
To implement this, much of the transport layer was reworked and modularized into several layers: reading and writing data on the wire is handled by Packet objects that the Transport manages. The packets are composed of multiple Segments that finally read and write various data fields.
To achieve a clean stream hand-off to another application, I added the concept of a "stream copier" that simplifies the task of getting a fixed amount of data from one stream to another, without closing the streams automatically, so that the transport can remain alive.
The main unresolved issue that I see is that I am unsure how to best handle error conditions with bulk packets. If you intend to send a bulk packet of length X, but it can't be delivered or the destination reads less than exactly X bytes, then the entire transport is left in an unknown state. In some cases, it can be known how much data remains, so possibly it can be read into the equivalent of /dev/null. So, there are still some TODOs in this and the next patch about this issue.
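For illustration, sending a bulk packet with the API as documented in this version of the patch might look roughly like this. The actor and type names are made up, and it assumes the copy helper reports completion via a promise; the right shape for that signalling is exactly what gets discussed in the review below:

transport.startBulkSend({
  actor: "profiler1",        // illustrative actor name
  type: "profileChunk",      // illustrative type
  length: profileByteLength
}).then(({ writer, done }) => {
  // |writer.copyFrom()| copies exactly |length| bytes from our input stream
  // onto the connection without closing it; |done| hands the connection
  // back to the transport once every byte has been written.
  writer.copyFrom(profileInputStream).then(() => done());
});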
Attachment #8359540 -
Flags: review?(jimb)
Assignee
Comment 27•11 years ago
This part adds bulk support in the client and server.
The client API for a bulk request is more complex than the existing "simple" JSON API, since a bulk send is a two-step process and there are several types of possible replies. So, an event-based request object is used instead.
You can reply to a bulk request with more bulk data or just send back JSON, as discussed above in comment 25.
As with part 1, there are several TODOs about how error conditions with bulk packets should be handled.
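A sketch of what the client-side flow might look like; the method name, event names, and payload shape here are placeholders standing in for whatever the patch actually exposes, and |client| is assumed to be a connected DebuggerClient:

let request = client.startBulkRequest({
  actor: appActor,           // illustrative actor name
  type: "uploadPackage",     // illustrative request type
  length: packageByteLength
});
request.on("bulk-send-ready", ({ writer, done }) => {
  // Stream exactly |length| bytes to the server, then release the transport.
  writer.copyFrom(packageStream).then(() => done());
});
request.on("json-reply", (reply) => {
  // Per the discussion above, the actor is free to answer with an ordinary
  // JSON packet instead of more bulk data.
});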
Attachment #8359543 -
Flags: review?(jimb)
Assignee
Comment 28•11 years ago
As a result of various refactoring changes in part 1, the Marionette server was updated to:
* use the new path to transport.js
* give the transport access to Cr
and the client was updated:
* to be more careful about packet parsing
* to expect less data to be received when the transport is live (a JSON packet is no longer written all at once)
I have not yet tested the Marionette JS client, but plan to do so soon.
Attachment #8359546 -
Flags: review?(jgriffin)
Assignee
Comment 29•11 years ago
Try run with parts 1 - 3 combined: https://tbpl.mozilla.org/?tree=Try&rev=71898a29709b
The B2G debug emulator issues seem unrelated, as other try builds appear to be affected by them too.
Comment 30•11 years ago
Comment on attachment 8359546 [details] [diff] [review]
Part 3: Fix Marionette client / server loading and parsing
Review of attachment 8359546 [details] [diff] [review]:
-----------------------------------------------------------------
Thanks!
Attachment #8359546 -
Flags: review?(jgriffin) → review+
Assignee
Comment 31•11 years ago
PR with updates to the transport spec:
https://github.com/jimblandy/DebuggerDocs/pull/20
Assignee
Comment 33•11 years ago
Jim told me over IRC that he plans to review this at the end of this week / early next week.
Flags: needinfo?(jimb)
Reporter
Comment 34•11 years ago
Hi --- I'm finishing up review on bug 332176; this is next on my list.
Reporter
Comment 35•11 years ago
Comment on attachment 8359540 [details] [diff] [review]
Part 1: Bulk data support in the transport layer
Review of attachment 8359540 [details] [diff] [review]:
-----------------------------------------------------------------
I really like this idea, in general: a bulk packet is represented (in effect) by code that can write the bytes as needed.
I have some initial comments on the API.
::: toolkit/devtools/transport/transport.js
@@ +129,5 @@
> + * and will not close the stream when writing is complete
> + * * writer: A helper object for getting your data onto the stream
> + * that meets the requirements above via
> + * |writer.copyFrom()|
> + * * done: Call this function when writing is complete
I wonder whether returning a promise is really the right thing here, instead of providing a callback function in the header.
The reasons to use a promise instead of a callback are that 1) a promise is a value that can be passed around and stored; 2) a promise can have any number of 'then' handlers waiting for its value; 3) promises have a well-defined error handling framework.
For 1), since the code that's going to produce the bits is probably the same code that knows the actor, type, and length, it seems like the "passed around and stored" advantage isn't very useful here; I'll bet that the callers would be just as happy to pass a callback to startBulkSend as to receive a promise from it.
For 2), isn't it a *bug* if more than one party wants to be notified when these values are ready?
If I'm reading right, 3) doesn't apply either, because error handling needs to get attached to the promise returned by the byte writer's 'then' call --- which this function can't do.
Instead, |header| should include a callback function that expects two arguments:
- The stream to write to; and
- A promise of the result of writing the bytes. The callback function must fulfill or reject this promise, depending on how the byte writing goes.
I don't understand 'writer' yet; but it sounds like if the above API is difficult to meet, we could provide helper functions.
@@ +180,5 @@
> + if (!this._outgoingEnabled || this._outgoing.length === 0) {
> + return;
> + }
> +
> + // If the top of the segment queue has nothing more to send, remove it.
Isn't this checking the top of the *packet* queue? _currentOutgoing is a packet.
Assignee
Comment 36•11 years ago
(In reply to Jim Blandy :jimb from comment #35)
> I have some initial comments on the API.
The API for this work was probably the hardest part of it all... I considered a number of different options throughout the process, and I am sure there's room to refine it, so I am glad we'll discuss any shortcomings in detail! :)
> ::: toolkit/devtools/transport/transport.js
> @@ +129,5 @@
> > + * and will not close the stream when writing is complete
> > + * * writer: A helper object for getting your data onto the stream
> > + * that meets the requirements above via
> > + * |writer.copyFrom()|
> > + * * done: Call this function when writing is complete
>
> I wonder whether returning a promise is really the right thing here, instead
> of providing a callback function in the header.
In general, I've always seen promises as strictly better than passing a callback for APIs of the "do X, then Y whenever X is eventually done" format, which is what happens here.
> The reasons to use a promise instead of a callback are that 1) a promise is
> a value that can be passed around and stored; 2) a promise can have any
> number of 'then' handlers waiting for its value; 3) promises have a
> well-defined error handling framework.
It also gives the receiver of the promise the ability to implement more complex flows like "do N things in parallel" and join on all of them completing, etc. Higher-level flows like this can be created easily via the API that a promise provides.
> For 1), since the code that's going to produce the bits is probably the same
> code that knows the actor, type, and length, it seems like the "passed
> around and stored" advantage isn't very useful here; I'll bet that the
> callers would be just as happy to pass a callback to startBulkSend as to
> receive a promise from it.
In the patch for part 2, I actually have separated the caller of startBulkSend from the work that follows it in the case that an actor wants to send a bulk reply back to the client. It gets a helper "reply" function that sets up the header a bit first.
But yes, a callback could be passed. Though I guess stylistically, the idea of a promise and its API still appeals to me.
> For 2), isn't it a *bug* if more than one party wants to be notified when
> these values are ready?
Yes, I agree, it would be pretty hard for multiple parties to do something sensible with this notification. However, I don't know that a promise necessarily makes the claim that "this API is logically useful to many consumers" directly. If you use a callback instead, you can still save its arguments, pass them to N other parties, etc. A promise doesn't relieve the caller from having to follow the correctness requirements of the thing you are calling.
> If I'm reading right, 3) doesn't apply either, because error handling needs
> to get attached to the promise returned by the byte writer's 'then' call ---
> which this function can't do.
There could also be errors when writing out the bulk header that could reject this current promise (though, I don't believe this happens in the current patch, so that's something to add).
I definitely agree that we'd like to also capture the errors during writing bytes. Below I propose something of a mix of the existing approach and your suggestion.
Generally though, error handling is trickier in the bulk case. You'll see some TODOs still in the patch, because it wasn't obvious what the best resolution of an error would be. In many cases once we've handed off the stream, we have no clue what may or may not have been written, and the entire transport is then in an unknown state and can't be used anymore.
> Instead, |header| should include a callback function that expects two
> arguments:
>
> - The stream to write to; and
>
> - A promise of the result of writing the bytes. The callback function must
> fulfill or reject this promise, depending on how the byte writing goes.
Using a promise instead of |done| seems like a great idea. I guess it didn't occur to me because it's a bit of an inversion of the typical use of promises: we would create a deferred and hand that off for someone else to resolve, whereas you typically hand off the promise.
What if |startBulkSend| returns a promise as it does currently, but then in place of |done| you get |deferred|, which you resolve / reject after writing bytes?
> I don't understand 'writer' yet; but it sounds like if the above API is
> difficult to meet, we could provide helper functions.
|writer| exists to provide callers help in getting exactly X bytes onto the stream (while leaving it open after), which is not obviously simple with the nsI*Stream APIs. See comments in stream-copier.js for more reasoning here. I'd love for it not to be needed at all, but at the very least it seemed like it would require a good deal of platform work to make streams fully implementable in JS.
> @@ +180,5 @@
> > + if (!this._outgoingEnabled || this._outgoing.length === 0) {
> > + return;
> > + }
> > +
> > + // If the top of the segment queue has nothing more to send, remove it.
>
> Isn't this checking the top of the *packet* queue? _currentOutgoing is a
> packet.
Yes, you're right. Will update the comment.
Reporter
Comment 37•11 years ago
Does bulk transport really introduce any new problems with respect to errors during sending or receiving? That is, even with just the extant JSON packet format, if there's an error while we're in the midst of writing a packet, we have no idea what state the transport is in any more; the JSON framing has a byte count, too.
It seems to me that, for both JSON and bulk packets, transmission or reception errors should be logged (as by makeInfallible) and the transport object should be poisoned, so that subsequent attempts to send throw exceptions.
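A sketch of the poisoning idea (the helper and property names are illustrative, not from any patch):

// Record the first fatal framing error and refuse all later sends.
function poisonTransport(transport, error) {
  transport._brokenBy = error;
}

function sendOrThrow(transport, packet) {
  if (transport._brokenBy) {
    throw new Error("Transport was poisoned by: " + transport._brokenBy);
  }
  transport.send(packet);
}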
Reporter
Comment 38•11 years ago
Comment on attachment 8359540 [details] [diff] [review]
Part 1: Bulk data support in the transport layer
Review of attachment 8359540 [details] [diff] [review]:
-----------------------------------------------------------------
Some more comments:
::: toolkit/devtools/transport/stream-copier.js
@@ +25,5 @@
> + *
> + * @param length Integer
> + * The amount of data that needs to be copied.
> + */
> +function StreamCopier(length) {
Did you consider nsIAsyncStreamCopier? It seems to do almost exactly what this does, and it takes flags to determine whether it should close the source and sink.
If that doesn't work, it does seem to me that this class could be a single function:
/**
* Copy exactly |length| bytes from |input| to |output|. On success,
* resolve the deferred |onDone| to |true|. On failure, reject |onDone|
* with the Error object thrown.
*
* If |output| is not buffered, we create a buffered stream for it.
*/
function copyStream (input, output, length, onDone)
{
...;
}
The current pattern:
let writer = new StreamCopier(this.length);
writer.output = blah;
... { writer: writer }
would become:
... {
writer: (input) => copyStream(input, output, this.length, onDone),
...
}
Making this a function would also unify the two places we buffer the output stream. And using a deferred gives an idiomatic way to report both success and failure.
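For what it's worth, a minimal sketch of that function, assuming |input| and |output| are async-capable streams, the usual Cc/Ci/Cr/Services globals are in scope, and glossing over how to account for a partial write when writeFrom() throws NS_BASE_STREAM_WOULD_BLOCK:

function copyStream(input, output, length, onDone) {
  let amountLeft = length;
  let copyChunk = () => {
    try {
      let amount = Math.min(amountLeft, input.available());
      amountLeft -= output.writeFrom(input, amount);
    } catch (e) {
      if (e.result === Cr.NS_BASE_STREAM_WOULD_BLOCK) {
        // The output side is full; try again when it can accept more.
        // (Accounting for bytes already copied before the throw is the
        // hard part, glossed over here.)
        output.asyncWait({ onOutputStreamReady: copyChunk }, 0, 0,
                         Services.tm.currentThread);
      } else {
        onDone.reject(e);
      }
      return;
    }
    if (amountLeft === 0) {
      output.flush();
      onDone.resolve(true);
    } else {
      // Wait for more input to arrive.
      input.asyncWait({ onInputStreamReady: copyChunk }, 0, 0,
                      Services.tm.currentThread);
    }
  };
  copyChunk();
}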
::: toolkit/devtools/transport/transport.js
@@ +405,5 @@
> + * close the stream when writing is complete
> + * * reader: A helper object for getting your data out of the stream
> + * that meets the requirements above via |reader.copyTo()|
> + * * done: Call this function when reading is complete
> + */
I think it's better to put documentation like this up in the description of DebuggerTransport, and then just cite that here. Ideally, the top comment should give you everything you need to know to use the class.
@@ +413,5 @@
> + // away when the transport is closed).
> + if (this.hooks && this.hooks.onBulkPacket) {
> + this.hooks.onBulkPacket(...args);
> + }
> + // TODO: else... error?
Yeah, I think both this and _onJSONObjectReady should throw if they get a packet and there's no hook ready for it. The transport should not still be having its onFooStreamReady methods called after it's been closed, right?
Perhaps it would be better to replace that entire 'if (this.hooks...) { ... }', here and in _onJSONObjectReady, with simply:
this.hooks.onBulkPacket(...);
That is, just call the hook unconditionally. Then you'll naturally get an error if it has been cleared, or if the user didn't provide the necessary method on the hook.
Reporter
Comment 39•11 years ago
(In reply to Jim Blandy :jimb from comment #38)
> Did you consider nsIAsyncStreamCopier? It seems to do almost exactly what
> this does, and it takes flags to determine whether it should close the
> source and sink.
AHH WAIT WAIT WAIT no fixed length. Got it now...
Assignee
Comment 40•11 years ago
(In reply to Jim Blandy :jimb from comment #37)
> Does bulk transport really introduce any new problems with respect to errors
> during sending or receiving? That is, even with just the extant JSON packet
> format, if there's an error while we're in the midst of writing a packet, we
> have no idea what state the transport is in any more; the JSON framing has a
> byte count, too.
>
> It seems to me that, for both JSON and bulk packets, transmission or
> reception errors should be logged (as by makeInfallible) and the transport
> object should be poisoned, so that subsequent attempts to send throw
> exceptions.
It's true that both formats have a byte count, and writing something other than that amount of bytes will mess up the whole transport.
The main difference with the bulk format is that the transport is no longer reading / writing most of the data itself, so it is reliant on the application level above to properly signal error conditions so that it knows the transport is now invalid. So, if the application level forgets to say anything / doesn't notice that it did something wrong, the transport will happily continue using a stream that is in an invalid state. Usually it blows up soon afterwards, but it's a little scary since it could make bug reports a bit trickier to correlate back to their root cause.
(In reply to Jim Blandy :jimb from comment #39)
> (In reply to Jim Blandy :jimb from comment #38)
> > Did you consider nsIAsyncStreamCopier? It seems to do almost exactly what
> > this does, and it takes flags to determine whether it should close the
> > source and sink.
>
> AHH WAIT WAIT WAIT no fixed length. Got it now...
Yeah, I was really hoping something like that would have worked, but the combination of fixed length and controlling when you close seems to not be available in the platform (at least I couldn't find it).
It would be really neat if nsI*Streams could be fully implemented in JS, as streams can be in Node.js, but that's a different bug.
(In reply to Jim Blandy :jimb from comment #38)
> ::: toolkit/devtools/transport/stream-copier.js
> @@ +25,5 @@
> > + *
> > + * @param length Integer
> > + * The amount of data that needs to be copied.
> > + */
> > +function StreamCopier(length) {
> If that doesn't work, it does seem to me that this class could be a single
> function:
I agree, I'll make this change.
> ::: toolkit/devtools/transport/transport.js
> @@ +405,5 @@
> > + * close the stream when writing is complete
> > + * * reader: A helper object for getting your data out of the stream
> > + * that meets the requirements above via |reader.copyTo()|
> > + * * done: Call this function when reading is complete
> > + */
>
> I think it's better to put documentation like this up in the description of
> DebuggerTransport, and then just cite that here. Ideally, the top comment
> should give you everything you need to know to use the class.
Okay, will do.
> @@ +413,5 @@
> > + // away when the transport is closed).
> > + if (this.hooks && this.hooks.onBulkPacket) {
> > + this.hooks.onBulkPacket(...args);
> > + }
> > + // TODO: else... error?
>
> Yeah, I think both this and _onJSONObjectReady should throw if they get a
> packet and there's no hook ready for it. The transport should not still be
> having its onFooStreamReady methods called after it's been closed, right?
>
> Perhaps it would be better to replace that entire 'if (this.hooks...) { ...
> }', here and in _onJSONObjectReady, with simply:
>
> this.hooks.onBulkPacket(...);
>
> That is, just call the hook unconditionally. Then you'll naturally get an
> error if it has been cleared, or if the user didn't provide the necessary
> method on the hook.
I recall there being a case (at least in the current version of the transport without these patches) where the hooks could be removed when they existed before in the following sequence of events:
* Packet is received, dispatch a future call to this.hooks
* The transport is closed, clearing the hooks
* Dispatched function finally runs, but now the hooks are gone
Maybe it's better to throw anyway, so that such errors are not lost, in case some of them are real problems...?
Comment 41•11 years ago
(In reply to J. Ryan Stinnett [:jryans] from comment #40)
> Maybe it's better to throw anyway, so that such errors are not lost, in case
> some of them are real problems...?
Seeing those errors in the logs is not good, as it sends us tilting at windmills when debugging.
Reporter
Comment 42•11 years ago
(In reply to J. Ryan Stinnett [:jryans] from comment #40)
> The main difference with the bulk format is that the transport is no longer
> reading / writing most of the data itself, so it is reliant on the
> application level above to properly signal error conditions so that it knows
> the transport is now invalid. So, if the application level forgets to say
> anything / doesn't notice that it did something wrong, the transport will
> happily continue using a stream that is in an invalid state. Usually it
> blows up soon afterwards, but it's a little scary since it could make bug
> reports a bit trickier to correlate back to their root cause.
Yes, that's true --- we are putting responsibility for stream synchronization on code outside the transport implementation itself, and that's a new source of problems. I think your stream-copier or my suggested stream copy function are both good ways to re-localize some of that responsibility. My hope is that bulk data transfers will not be widely used, so this won't be a big distraction.
> I recall there being a case (at least in the current version of the
> transport without these patches) where the hooks could be removed when they
> existed before in the following sequence of events:
>
> * Packet is received, dispatch a future call to this.hooks
> * The transport is closed, clearing the hooks
> * Dispatched function finally runs, but now the hooks are gone
If so, I think that's the bug, then. It's crazy to run hooks when the underlying transport is dead.
Reporter
Comment 43•11 years ago
(In reply to Jim Blandy :jimb from comment #42)
> If so, I think that's the bug, then. It's crazy to run hooks when the
> underlying transport is dead.
I meant to say: It's crazy to continue to run dispatched packet handlers when the transport has been closed.
Reporter
Comment 44•11 years ago
Comment on attachment 8359540 [details] [diff] [review]
Part 1: Bulk data support in the transport layer
Review of attachment 8359540 [details] [diff] [review]:
-----------------------------------------------------------------
::: toolkit/devtools/transport/transport.js
@@ +139,5 @@
> + let deferred = defer();
> +
> + packet.on("pause-outgoing", () => this.pauseOutgoing());
> + packet.on("resume-outgoing", () => this.resumeOutgoing());
> + packet.once("bulk-write-ready", () => {
For this packet-transport interaction, |sdk/event/core| seems pretty heavy. It's hard for me to anticipate other listeners for these events, so the decoupled broadcast characteristics that |sdk/event/core| is designed to support don't seem valuable here. Supporting more complex interactions is something we should think through when we have concrete examples.
I'd rather simply see Transport offer |pauseOutgoing| and |resumeOutgoing| as an interface packets can use directly as needed. (The packet constructor is already passed - although doesn't currently use? - the transport.) I *think* the changes previously suggested to the packet-writing callback - specifically, that it expect a deferred that it must resolve with the result of writing the packet - will absorb "bulk-write-ready", too.
Reporter
Comment 45•11 years ago
Regarding the use of promises for the bulk packet writer, and event management for interaction between packet and transport:
To be clear: I'm not concerned about the efficiency of these facilities, or the complexity of their implementations, or things like that. Bulk packets will be unusual; and that's all well-tested code. Hopefully it even JITs well.
What worries me is that they're designed to support general, open-ended interactions and compositions --- but those are not actually desirable at this layer of the system. When things go wrong, we hang; or the stream becomes desynchronized; etc. etc.
It's true that a packet-writing callback, or a pause/unpause interface on the transport, exposed for packets to use, can be abused to implement surprising behaviors just as much as a promise or an event could. But my feeling here is that the *connotations* of these interfaces --- the way they invite themselves to be used, based on tradition, the way a doorknob suggests grasping it and turning it --- are very much wrong for this situation.
At this layer in the stack I think we want a restricted set of possibilities that is easy to audit.
Reporter
Comment 46•11 years ago
I can continue to review other parts of the patch if you'd find it useful, but it would also be good to look over the revisions.
Reporter
Updated•11 years ago
Attachment #8359540 -
Flags: review?(jimb)
Assignee
Comment 47•11 years ago
(In reply to Jim Blandy :jimb from comment #44)
> Comment on attachment 8359540 [details] [diff] [review]
> Part 1: Bulk data support in the transport layer
>
> Review of attachment 8359540 [details] [diff] [review]:
> -----------------------------------------------------------------
>
> ::: toolkit/devtools/transport/transport.js
> @@ +139,5 @@
> > + let deferred = defer();
> > +
> > + packet.on("pause-outgoing", () => this.pauseOutgoing());
> > + packet.on("resume-outgoing", () => this.resumeOutgoing());
> > + packet.once("bulk-write-ready", () => {
>
> For this packet-transport interaction, |sdk/event/core| seems pretty heavy.
> It's hard for me to anticipate other listeners for these events, so the
> decoupled broadcast characteristics that |sdk/event/core| is designed to
> support don't seem valuable here. Support more complex interactions is
> something we should think through when we have concrete examples.
>
> I'd rather simply see Transport offer |pauseOutgoing| and |resumeOutgoing|
> as an interface packets can use directly as needed. (The packet constructor
> is already passed - although doesn't currently use? - the transport.) I
> *think* the changes previously suggested to the packet-writing callback -
> specifically, that it expect a deferred that it must resolve with the result
> of writing the packet - will absorb "bulk-write-ready", too.
Somewhat ironically... this is what I did initially. ;) It's mostly a stylistic change, as I just liked things being less coupled, but the argument that it is too "heavy" makes sense, especially because these things are likely not that interesting to other consumers anyway.
I'll change it back as you're suggesting.
Assignee
Comment 48•11 years ago
(In reply to Jim Blandy :jimb from comment #45)
> Regarding the use of promises for the bulk packet writer, and event
> management for interaction between packet and transport:
>
> To be clear: I'm not concerned about the efficiency of these facilities, or
> the complexity of their implementations, or things like that. Bulk packets
> will be unusual; and that's all well-tested code. Hopefully it even JITs
> well.
>
> What worries me is that they're designed to support general, open-ended
> interactions and compositions --- but those are not actually desirable at
> this layer of the system. When things go wrong, we hang; or the stream
> becomes desynchronized; etc. etc.
>
> It's true that a packet-writing callback, or a pause/unpause interface on
> the transport, exposed for packets to use, can be abused to implement
> surprising behaviors just as much as a promise or an event could. But my
> feeling here is that the *connotations* of these interfaces --- the way they
> invite themselves to be used, based on tradition, the way a doorknob
> suggests grasping it and turning it --- are very much wrong for this
> situation.
>
> At this layer in the stack I think we want a restricted set of possibilities
> that is easy to audit.
For the case of events between packet and transport, your arguments make sense to me. As I replied above, I'll remove the events here. Though, more use of events does appear in the client layer (in the part 2 patch), so we'll see what you think when you review that. (I think it does make more sense there, but we'll see.)
For the promises, I still feel the API that a promise offers is just so much nicer that it outweighs these concerns in my mind. Every time I see callback-passing style, I think "this would be better with a promise". I believe the main concern you see there is that someone would assume multiple people could await the "ready for writing" state, but:
* I've already said it, but I still think that if someone logically thinks about the API they are calling, it becomes clear that this wouldn't be a great choice (as you say yourself, promises are not what enables this to be possible, as it could be done with a callback too)
* You've suggested the "ready for writing" callback receive a deferred to tell the transport when writing is done, which is a great addition. For parallelism with the rest of the API, I think it's best to keep the initial promise too.
* There are promises all over the place in the Dev Tools codebase, even at low layers, like the results returned by an actor. I believe they are well understood by our team, and they have a clear error-handling story, so they should not introduce instability to this area of the code.
* This is only the transport layer API, so most people are not going to be working at this level anyway. I am much more concerned about exactly how intuitive the client / server layer API (in part 2) is, since that's what will show up all over the rest of the tools. (The transport should still be intuitive too! I just don't feel as worried I guess.)
So, I think for now I will leave that promise in place. If you look at the revised patch and still hate it... well, then I'll have to take it out I guess. :)
Reporter
Comment 49•11 years ago
(In reply to J. Ryan Stinnett [:jryans] from comment #48)
> So, I think for now I will leave that promise in place. If you look at the
> revised patch and still hate it... well, then I'll have to take it out I
> guess. :)
Sure, let's look the revision over.
Assignee
Comment 50•11 years ago
Integrates all part 1 review changes so far, including:
* Packets and segments talk to the transport directly, no more events
* Transport hands over a |done| deferred for readers and writers to use to signal completion, instead of a |done| callback
* The stream copier is streamlined into one function, somewhat like what you proposed, but internally the copier still uses an object to ease responding to the async stream callbacks
* Moved bulk read docs to the top of the transport
* Added an |active| field to the transport which is tested when trying to call hooks to know if the transport is still alive
Let me know if you'd like me to attach an interdiff.
Attachment #8359540 -
Attachment is obsolete: true
Attachment #8372773 -
Flags: review?(jimb)
Assignee
Comment 51•11 years ago
Updated to match API changes in the revised part 1.
Attachment #8359543 -
Attachment is obsolete: true
Attachment #8359543 -
Flags: review?(jimb)
Attachment #8372774 -
Flags: review?(jimb)
Assignee
Comment 52•11 years ago
Rebased against latest fx-team. Carrying over r+ from attachment 8359546 [details] [diff] [review].
Attachment #8359546 -
Attachment is obsolete: true
Attachment #8375129 -
Flags: review+
Reporter
Comment 54•11 years ago
Comment on attachment 8372773 [details] [diff] [review]
Part 1 (v2): Bulk data support in the transport layer
Review of attachment 8372773 [details] [diff] [review]:
-----------------------------------------------------------------
Still reviewing. Some comments as I go:
::: toolkit/devtools/transport/segments.js
@@ +4,5 @@
> +
> +/**
> + * Segments contain read / write functionality for low-level parts of a packet
> + * in the debugging protocol. The packets use various combinations of these to
> + * support parse and serializing data in a given format.
"parsing"?
@@ +44,5 @@
> +};
> +
> +this.StringSegment = StringSegment;
> +
> +function DelimitedSegment(delimeter) {
sp: "delimiter"
::: toolkit/devtools/transport/stream-copier.js
@@ +81,5 @@
> + bytesCopied = this.output.writeFrom(this.input, amountToCopy);
> + } catch(e if e.result == Cr.NS_BASE_STREAM_WOULD_BLOCK) {
> + // Some data may still have been copied, let's try to find out.
> + bytesCopied = bytesAvailable - this.input.available();
> + this.amountLeft -= bytesCopied;
Doesn't this assume that this.input can't acquire more readable data since the last time we called this.input.available? Can that ever happen? If it did, then bytesCopied would be an underestimate, or possibly even negative.
@@ +85,5 @@
> + this.amountLeft -= bytesCopied;
> + dumpv(() => "Copied: " + bytesCopied + ", Left: " + this.amountLeft);
> + dumpv(() => "Waiting for output stream");
> + let threadManager = Cc["@mozilla.org/thread-manager;1"].getService();
> + this.output.asyncWait(this, 0, 0, threadManager.currentThread);
If 'Services' is in scope, can we just say Services.tm.currentThread?
@@ +94,5 @@
> + dumpv(() => "Copied: " + bytesCopied + ", Left: " + this.amountLeft);
> +
> + if (this.amountLeft === 0) {
> + dumpv(() => "Copy done!");
> + this.output.flush();
It seems to me we should only flush if we created the buffered-output-stream ourselves. If someone else created it, then wouldn't this force it to be flushed earlier than it needs to be?
::: toolkit/devtools/transport/transport.js
@@ +18,5 @@
> +/**
> + * A verbose logger for low-level transport tracing.
> + * @param function msgFactory
> + * A function that returns the message to log (so that arguments aren't
> + * computed unless they'll be used)
I don't think this optimization actually wins. It's true that it allows us to avoid computing strings we will almost never use, but it trades that for other costs:
1) Each closure you pass to dumpv is a fresh object being allocated even when logging is off. So you're still creating unused objects. (Since we don't apply IonMonkey to chrome code, we won't inline the whole pile away.)
2) When the closure actually refers to variables in its environment, it forces those variables to be placed in a heap-allocated Call object, created every time their scope is entered. Thus, since DebuggerTransport.prototype._processIncoming now says:
dumpv(() => "data available: " + count);
each call to _processIncoming will allocate a Call object on the heap to hold 'count'. It looks like that function doesn't have any other closures, so the dumpv closure really will be the sole reason for allocating the Call object.
One alternative might be to take *either* a string *or* a closure.
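That alternative might look something like this (a sketch; the verbosity flag and log prefix are placeholders, not names from the patch):

function dumpv(msg) {
  if (!wantVerboseLogging) {
    return;
  }
  // Accept either a plain string (the common, allocation-free case) or a
  // closure for messages that are expensive to build.
  dump("DBG-TRANSPORT: " + (typeof msg === "function" ? msg() : msg) + "\n");
}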
@@ +65,5 @@
> *
> + * - onBulkPacket(packet) - called when we have switched to bulk packet
> + * receiving mode. |packet| is an object containing:
> + * * actor: Actor that will receive the packet
> + * * type: Method of the actor that should be called on receipt
Should these say "Name of actor" and "Name of method"?
@@ +69,5 @@
> + * * type: Method of the actor that should be called on receipt
> + * * length: Size of the data to be read
> + * * stream: This input stream should only be used directly if you can ensure
> + * that you will read exactly |length| bytes and will not close the
> + * stream when writing is complete
s/writing/reading/
Assignee
Comment 55•11 years ago
Jim, not sure if you've looked at the tests yet, but you might notice some are marked as "run-sequentially". I've fixed this issue locally, so no need to wonder about it (was related to file name collisions).
Reporter
Comment 56•11 years ago
(In reply to J. Ryan Stinnett [:jryans] from comment #55)
> Jim, not sure if you've looked at the tests yet, but you might notice some
> are marked as "run-sequentially". I've fixed this issue locally, so no need
> to wonder about it (was related to file name collisions).
Excellent!
Reporter
Comment 57•11 years ago
Comment on attachment 8372773 [details] [diff] [review]
Part 1 (v2): Bulk data support in the transport layer
Review of attachment 8372773 [details] [diff] [review]:
-----------------------------------------------------------------
I think my big concern about this patch is that the packet / segment division doesn't seem
to earn its keep. Incoming packet header parsing gets spread across five functions in
three files:
- transport.js's _processIncoming pulls four bytes off the stream, and passes them to _createIncoming.
- _createIncoming calls packetFactory.fromType to create an appropriate Packet instance.
- packet.js's packetFactory.fromType does a two-case switch on the type string.
- packet.js's JSONPacket and segment.js's DelimitedSegment need readInitial methods to
cope with the possibility of having over-read in _processIncoming.
This is too much code to pay for parsing either "LENGTH:" or "bulk ACTOR TYPE LENGTH:". We
need to get this job done with fewer interacting parts.
I would expect packet header parsing to work like this:
- Get the full block of text available on the stream, and append it to whatever we've
accumulated so far.
- If it doesn't contain a colon, just wait for more input. But if it's gotten too big for
any plausible header, drop the connection and complain.
- Do some regexp matches against the text before the colon to determine what form of
packet it is. Use the regexp match data to extract parameters, and call
packet-format-specific handler functions, hard-coded.
This should be tens of lines of code at most. The patch tries to delegate parsing to
the packets, which then delegate each part of the packet to a segment --- but I actually
think it's an advantage to have a single function that knows all the packet formats we
support, and how to parse their headers. While the JSON / bulk dichotomy is
well-motivated, a free-wheeling bazaar of packet formats is not something we want to
design for; it's good for people to feel constrained to use one of the existing forms.
In other words: there's a simplicity win available to us here that we should not abandon
in favor of generality that we would rather people not use in the first place.
So, in concrete terms:
- The DebuggerTransport interface seems sane, and should stay as it is.
- If I'm reading right, the onBulkPacket handler and startBulkSend's returned promise's
'done' promise give DebuggerTransport all the information it needs to call
{pause,resume}{Incoming,Outgoing} itself. So those don't need to be public at all.
- Reading one byte at a time via readInputToStream is painful. Instead of carefully
avoiding reading past the delimiter, it would be better for bulk packet readers to
accept a string containing any bytes the header parser read beyond the colon; and
possibly return a string containing any bytes that were read past the packet's end.
- Packet headers should be parsed directly in _processIncoming, as described above.
- Then, specific Packet subclasses can restrict themselves to dealing with packet bodies.
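A sketch of the single header-parsing step described above (the regexps and names are illustrative only):

const JSON_HEADER = /^(\d+):/;
const BULK_HEADER = /^bulk ([^: ]+) ([^: ]+) (\d+):/;

function parseHeader(text) {
  let match = text.match(BULK_HEADER);
  if (match) {
    return { bulk: true, actor: match[1], type: match[2], length: +match[3] };
  }
  match = text.match(JSON_HEADER);
  if (match) {
    return { bulk: false, length: +match[1] };
  }
  // No complete header yet; accumulate more input (and give up if the
  // buffer grows past any plausible header size).
  return null;
}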
::: toolkit/devtools/transport/packets.js
@@ +113,5 @@
> + set object(object) {
> + this._object = object;
> + let json = JSON.stringify(object);
> + json = unicodeConverter.ConvertFromUnicode(json);
> + this._lengthSegment.data = json.length;
This is a minor thing, but:
Am I understanding correctly that, even though this sets the 'data' property of the DelimitedSegment instance to a number, it will be converted into a string by the concatenation with the delimiter in DelimitedSegment.prototype.write? This feels delicate, since DelimitedSegment doesn't seem to promise to behave this way; the comments in DS.p.write seem to suggest that concatenating the delimiter is just an implementation detail.
Attachment #8372773 -
Flags: review?(jimb)
Reporter
Comment 58•11 years ago
One problem with establishing the colon as the header delimiter for all packet framing types, as I suggest above, is that it requires that headers not contain colons in their data fields. Unfortunately, we are, in fact, using colons in actor names for subprocess packet forwarding; see:
https://hg.mozilla.org/mozilla-central/file/63249d900cfb/toolkit/devtools/server/main.js#l640
That forwarding mechanism, however, has always been an implementation detail, not a normative part of the protocol. So I think the forwarding code should change to use some other character. I can take care of this.
Reporter
Comment 59•11 years ago
Comment on attachment 8372774 [details] [diff] [review]
Part 2 (v2): Bulk data support in client and server
Review of attachment 8372774 [details] [diff] [review]:
-----------------------------------------------------------------
Still reviewing; some comments so far:
::: toolkit/devtools/client/dbg-client.jsm
@@ +610,5 @@
> this._sendRequests();
> },
>
> /**
> + * Transmit streaming data via a bulk request.
If I'm reading this right, we're only prepared for bulk replies to bulk requests, not bulk replies to JSON requests, is that correct?
It seems to me that one might very well prefer to send a JSON request that elicits a bulk response, say:
{ to: profiler, type: "retrieveProfile",
startTime: 1234, endTime: 5678 }
Bulk packets are inevitably harder to work with than JSON packets, so we certainly don't want restrictions that make JSON packets hard to use.
Really, in principle, any request or reply should be able to use any packet format. A packet in an unexpected format should be handled similarly to a packet with an unexpected type.
Naturally, the callbacks that all extant callers pass to DebuggerClient.prototype.request are only prepared to handle JSON, so bulk-ready callers need a different way to send the packet. Perhaps a second callback? Perhaps return the Request and let them 'on' their own "bulk-reply" handler?
Here's a possibly dumb idea; ignore if it doesn't appeal to you:
What if we simply presented bulk replies as the same general kind of JS objects as JSON replies, except that the bulk replies are objects with the actor/type/length/stream/done/copyTo properties, as passed to your bulk-reply event listeners now? (As suggested below, I think "actor" should become "to".)
Then, handler code presented with a packet in the wrong format would simply fail to recognize the type; or fail to find the properties it expects; and those are cases which we already have to handle anyway.
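In that world, a caller's handler might branch on the reply's shape, something like this (a sketch; it assumes |copyTo| reports completion via a promise, and the destination stream and JSON handler are illustrative):

client.request(packet, (reply) => {
  if (reply.copyTo && typeof reply.length === "number") {
    // A bulk reply: stream the bytes to a destination of our choosing,
    // then hand the connection back to the transport.
    reply.copyTo(destinationStream).then(() => reply.done());
  } else {
    // An ordinary JSON reply, handled as today.
    handleJsonReply(reply);
  }
});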
@@ +623,5 @@
> + *
> + * @param request Object
> + * This is modeled after the format of JSON packets above, but does not
> + * actually contain the data, but is instead just a routing header:
> + * * actor: Actor that will receive the packet
If we're modeling it on the JSON packets, should 'actor' be 'to'? This would make Request.prototype.[[get actor]] simpler, I think.
@@ +630,5 @@
> + * @return Request
> + * This object emits a number of events to allow to respond to
> + * different parts of the bulk request lifecycle:
> + * * bulk-send-ready: Ready to send bulk data to the server, using the
> + * using the event data object containing:
"using the using the"
@@ +678,5 @@
> +
> + if (!this.mainRoot) {
> + throw Error("Have not yet received a hello packet from the server.");
> + }
> + let type = request.type || "";
Shouldn't we throw if there's no type, too?
Assignee
Comment 60•11 years ago
As mentioned, we'll need to change the forwarding prefix from ":" to "/", so that the packet header can be parsed by reading until the next ":".
I've tested a b2g build with this patch, and I am able to debug an app successfully.
Attachment #8397479 -
Flags: review?(jimb)
Assignee
Comment 61•11 years ago
This patch breaks out the purely stylistic / mechanical changes to transport.js that used to be mixed into part 1.
Attachment #8397480 -
Flags: review?(jimb)
Assignee
Comment 62•11 years ago
Summary of changes made since the last review of part 1:
* Reading packet headers up to the first ":" and using regexes to parse them further
* Removed segment layer, remaining pieces folded into packet layer
* Changed stream copier to use |tell()| to always know how much was copied
* Removed all logging closures
* Removed usages of readInputStreamToString
* Only one scriptable stream created now (by the transport)
* Actor / method comment updates
* Use Services.tm.currentThread
Still working on updating part 2, but should have that soon.
(In reply to Jim Blandy :jimb from comment #54)
> Comment on attachment 8372773 [details] [diff] [review]
> Part 1 (v2): Bulk data support in the transport layer
>
> Review of attachment 8372773 [details] [diff] [review]:
> -----------------------------------------------------------------
>
> ::: toolkit/devtools/transport/stream-copier.js
> @@ +81,5 @@
> > + bytesCopied = this.output.writeFrom(this.input, amountToCopy);
> > + } catch(e if e.result == Cr.NS_BASE_STREAM_WOULD_BLOCK) {
> > + // Some data may still have been copied, let's try to find out.
> > + bytesCopied = bytesAvailable - this.input.available();
> > + this.amountLeft -= bytesCopied;
>
> Doesn't this assume that this.input can't acquire more readable data since
> the last time we called this.input.available? Can that ever happen? If it
> did, then bytesCopied would be an underestimate, or possibly even negative.
It does make that assumption, good catch! I did not experience such a situation in my own tests, but since sockets are read / written on background threads, it seems feasible that the numbers could indeed differ.
I've replaced this mechanism with a new approach where we always wrap the output stream in a buffered stream. That gives us access to |tell()|, which reports exactly how many bytes have been written; that is what we were trying to determine here anyway. This should remove the possibility of confusion, since we are no longer re-reading counters that could move underneath us.
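In other words, roughly (simplified; |output|, |input|, and the byte counters come from the copier's state):

// Simplified sketch of the tell()-based accounting.
let buffered = Cc["@mozilla.org/network/buffered-output-stream;1"]
                 .createInstance(Ci.nsIBufferedOutputStream);
buffered.init(output, BUFFER_SIZE);
buffered.QueryInterface(Ci.nsISeekableStream);

let before = buffered.tell();
try {
  buffered.writeFrom(input, amountToCopy);
} catch (e if e.result == Cr.NS_BASE_STREAM_WOULD_BLOCK) {
  // Fine: tell() below still reports exactly how much made it out.
}
let bytesCopied = buffered.tell() - before;
amountLeft -= bytesCopied;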
> @@ +85,5 @@
> > + this.amountLeft -= bytesCopied;
> > + dumpv(() => "Copied: " + bytesCopied + ", Left: " + this.amountLeft);
> > + dumpv(() => "Waiting for output stream");
> > + let threadManager = Cc["@mozilla.org/thread-manager;1"].getService();
> > + this.output.asyncWait(this, 0, 0, threadManager.currentThread);
>
> If 'Services' is in scope, can we just say Services.tm.currentThread?
We can, fixed.
> @@ +94,5 @@
> > + dumpv(() => "Copied: " + bytesCopied + ", Left: " + this.amountLeft);
> > +
> > + if (this.amountLeft === 0) {
> > + dumpv(() => "Copy done!");
> > + this.output.flush();
>
> It seems to me we should only flush if we created the buffered-output-stream
> ourselves. If someone else created it, then wouldn't this force it to be
> flushed earlier than it needs to be?
We now always create the buffered stream, so I've kept the flush here. Either way, I don't know that there is too much harm in it.
> ::: toolkit/devtools/transport/transport.js
> @@ +18,5 @@
> > +/**
> > + * A verbose logger for low-level transport tracing.
> > + * @param function msgFactory
> > + * A function that returns the message to log (so that arguments aren't
> > + * computed unless they'll be used)
>
> I don't think this optimization actually wins.
You're right, it doesn't! :( I was hoping it would save me in cases where building the logging string does lots of extra work, but paying for all the closures is also expensive. Here's a totally unscientific comparison I threw together: http://jsperf.com/logging-string-closure/2
I've converted this to the more typical case of just taking in a string. Where there is extra computation for logging (only a few cases, it turns out), I guard at the call site (again, as per usual).
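So the pattern is now just (a sketch; |wantVerbose| stands in for whatever flag actually gates verbose logging):

// dumpv takes a plain string; expensive formatting is guarded at the call
// site so we only pay for it when verbose logging is on.
function dumpv(msg) {
  if (wantVerbose) {
    dump("DBG-TRANSPORT: " + msg + "\n");
  }
}

dumpv("Flushing outgoing queue");        // cheap, no guard needed

if (wantVerbose) {                       // guard only the costly case
  dumpv("Packet as JSON: " + JSON.stringify(packet));
}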
> @@ +65,5 @@
> > *
> > + * - onBulkPacket(packet) - called when we have switched to bulk packet
> > + * receiving mode. |packet| is an object containing:
> > + * * actor: Actor that will receive the packet
> > + * * type: Method of the actor that should be called on receipt
>
> Should these say "Name of actor" and "Name of method"?
Fixed.
> @@ +69,5 @@
> > + * * type: Method of the actor that should be called on receipt
> > + * * length: Size of the data to be read
> > + * * stream: This input stream should only be used directly if you can ensure
> > + * that you will read exactly |length| bytes and will not close the
> > + * stream when writing is complete
>
> s/writing/reading/
Fixed.
> - If I'm reading right, the onBulkPacket handler and startBulkSend's
> returned promise's
> 'done' promise give DebuggerTransport all the information it needs to call
> {pause,resume}{Incoming,Outgoing} itself. So those don't need to be public
> at all.
I've left these public, as they are only needed for bulk packets, so BulkPacket should be the one to trigger these changes at precisely the right time. I would strongly prefer to avoid having the transport inspect the type of packets in its incoming / outgoing queues, which is what keeping these methods internal would seem to require.
> ::: toolkit/devtools/transport/packets.js
> @@ +113,5 @@
> > + set object(object) {
> > + this._object = object;
> > + let json = JSON.stringify(object);
> > + json = unicodeConverter.ConvertFromUnicode(json);
> > + this._lengthSegment.data = json.length;
>
> This is a minor thing, but:
>
> Am I understanding correctly that, even though this sets the 'data' property
> of the DelimitedSegment instance to a number, it will be converted into a
> string by the concatenation with the delimiter in
> DelimitedSegment.prototype.write? This feels delicate, since
> DelimitedSegment doesn't seem to promise to behave this way; the comments in
> DS.p.write seem to suggest that concatenating the delimiter is just an
> implementation detail.
The relevant code in this version attempts to convert more explicitly.
Attachment #8372773 -
Attachment is obsolete: true
Attachment #8404130 -
Flags: review?(jimb)
Assignee | ||
Comment 63•11 years ago
|
||
Summary of changes since last review of part 2:
* SDK event methods now on Request's prototype
* Added support for JSON request with bulk reply
(In reply to Jim Blandy :jimb from comment #59)
> Comment on attachment 8372774 [details] [diff] [review]
> Part 2 (v2): Bulk data support in client and server
>
> Review of attachment 8372774 [details] [diff] [review]:
> -----------------------------------------------------------------
>
> Still reviewing; some comments so far:
>
> ::: toolkit/devtools/client/dbg-client.jsm
> @@ +610,5 @@
> > this._sendRequests();
> > },
> >
> > /**
> > + * Transmit streaming data via a bulk request.
>
> If I'm reading this right, we're only prepared for bulk replies to bulk
> requests, not bulk replies to JSON requests, is that correct?
I've gone with the option of returning the |Request| object in the JSON request path. Those callers can then use |on("bulk-reply", ...)|, just as in the bulk request path, to receive a bulk reply.
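So a caller that sends JSON but expects a bulk reply looks roughly like this (the actor, the type, and the exact |copyTo| signature are illustrative):

let request = client.request({ to: profilerActor, type: "getProfileData" });
request.on("bulk-reply", ({ actor, type, length, copyTo }) => {
  // Stream the reply straight to a file instead of building a huge string.
  copyTo(fileOutputStream).then(() => {
    dump("Received " + length + " bytes from " + actor + "\n");
  });
});
request.on("json-reply", (reply) => {
  // The server can still answer with JSON, e.g. an error packet.
  if (reply.error) {
    dump("Request failed: " + reply.error + "\n");
  }
});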
> @@ +623,5 @@
> > + *
> > + * @param request Object
> > + * This is modeled after the format of JSON packets above, but does not
> > + * actually contain the data, but is instead just a routing header:
> > + * * actor: Actor that will receive the packet
>
> If we're modeling it on the JSON packets, should 'actor' be 'to'? This would
> make Request.prototype.[[get actor]] simpler, I think.
I thought about this for a bit too, but it just pushes the confusion down to the transport layer where packet reading / writing will have more things like "from || to". By using the non-directed property name "actor" in all places, we can avoid worrying about this.
It seems fine to differ in this way, since we have a new client method for sending a bulk request, so there shouldn't be a clash for existing callers.
> @@ +630,5 @@
> > + * @return Request
> > + * This object emits a number of events to allow to respond to
> > + * different parts of the bulk request lifecycle:
> > + * * bulk-send-ready: Ready to send bulk data to the server, using the
> > + * using the event data object containing:
>
> "using the using the"
Fixed.
> @@ +678,5 @@
> > +
> > + if (!this.mainRoot) {
> > + throw Error("Have not yet received a hello packet from the server.");
> > + }
> > + let type = request.type || "";
>
> Shouldn't we throw if there's no type, too?
Sure, I've added another check for that case.
Attachment #8372774 -
Attachment is obsolete: true
Attachment #8372774 -
Flags: review?(jimb)
Attachment #8404393 -
Flags: review?(jimb)
Reporter | ||
Updated•11 years ago
|
Attachment #8397480 -
Flags: review?(jimb) → review+
Reporter | ||
Comment 64•11 years ago
|
||
Comment on attachment 8397479 [details] [diff] [review]
Part -1: Change forwarding prefix to slash
Review of attachment 8397479 [details] [diff] [review]:
-----------------------------------------------------------------
You need to take care of toolkit/devtools/server/tests/unit/test_forwardingprefix.js too.
Did you run the xpcshell tests on this patch? If so, why didn't those tests fail?
Attachment #8397479 -
Flags: review?(jimb)
Assignee | ||
Comment 65•11 years ago
|
||
Not sure how I missed that... The test did indeed fail (or at least it hung after printing many PASS messages). ;)
I'm guessing I was running a subset of the tests before, but hard to say for sure.
Attachment #8397479 -
Attachment is obsolete: true
Attachment #8411788 -
Flags: review?(jimb)
Reporter | ||
Comment 66•11 years ago
|
||
Comment on attachment 8411788 [details] [diff] [review]
Part -1 (v2): Change forwarding prefix to slash
Review of attachment 8411788 [details] [diff] [review]:
-----------------------------------------------------------------
Yay. This could be landed immediately, to shake out problems; just put [leave open] in the whiteboard.
Attachment #8411788 -
Flags: review?(jimb) → review+
Reporter | ||
Comment 67•11 years ago
|
||
Comment on attachment 8397480 [details] [diff] [review]
Part 0: Clean up transport style
Review of attachment 8397480 [details] [diff] [review]:
-----------------------------------------------------------------
This could also be landed immediately.
Reporter | ||
Comment 68•11 years ago
|
||
Comment on attachment 8404130 [details] [diff] [review]
Part 1 (v3): Bulk data support in the transport layer
Review of attachment 8404130 [details] [diff] [review]:
-----------------------------------------------------------------
Still reviewing.
::: toolkit/devtools/transport/stream-utils.js
@@ +54,5 @@
> + // Some output streams are already buffered, so this is not always required
> + // for buffering alone. However, this extra layer also gives us access to the
> + // nsISeekableStream interface that buffered streams happen to support, which
> + // has the |tell()| function. We can use this as a clear way to always know
> + // exactly how much we have written.
Ah! I finally understand why this is necessary!!!
The nsIOutputStream::write method definitely promises that, if it returns NS_BASE_STREAM_WOULD_BLOCK, it didn't transfer any data. Otherwise, nsBufferedOutputStream::Flush and friends would be broken.
The specific problem is that nsBufferedOutputStream::WriteSegments does not preserve this property. It can go around the loop a few times and transfer data, and then return an NS_BASE_STREAM_WOULD_BLOCK result from a subsequent Write to the underlying stream:
https://hg.mozilla.org/mozilla-central/file/cfde3603b020/netwerk/base/src/nsBufferedStreams.cpp#l697
Filed as bug 1004313.
Reporter | ||
Comment 69•10 years ago
|
||
Comment on attachment 8404130 [details] [diff] [review]
Part 1 (v3): Bulk data support in the transport layer
Review of attachment 8404130 [details] [diff] [review]:
-----------------------------------------------------------------
Okay, this looks good. Really nice code in transport.js and stream-utils.js.
::: toolkit/devtools/transport/packets.js
@@ +207,5 @@
> + *
> + * bulk [actor] [type] [length]:[data]
> + *
> + * The data portion is application-defined and does not follow any one
> + * specification. See the Remote Debugging Protocol Stream Transport spec for
"The interpretation of the data portion depends on the kind of actor and the packet's type." would be more accurate.
@@ +308,5 @@
> + this._transport.pauseOutgoing();
> +
> + let deferred = defer();
> +
> + this._readyForWriting.resolve({
Should we just use this.streamReadyForWriting here? Otherwise, it's effectively part of BulkPacket's contract that someone invokes the readyForWriting getter before calling 'write', which, while likely, seems delicate and obscure.
::: toolkit/devtools/transport/stream-utils.js
@@ +54,5 @@
> + // Some output streams are already buffered, so this is not always required
> + // for buffering alone. However, this extra layer also gives us access to the
> + // nsISeekableStream interface that buffered streams happen to support, which
> + // has the |tell()| function. We can use this as a clear way to always know
> + // exactly how much we have written.
Now that bug 1004313 has landed, it should be safe to assume that a NS_BASE_STREAM_WOULD_BLOCK error means that no data has been transferred. I did an audit of the code base's nsIOutputStream::writeFrom implementations, and the nsIInputStream::ReadSegments implementations that some of the writeFrom implementations might use, and as far as I can see, they should all be safe. So this code can be simplified to use the 'output' stream as-is.
(The exceptions are as follows:
nsTemporaryFileInputStream::ReadSegments: ignores all errors or partial counts from writer; would drop data. Not accessible from JS, so not relevant. Filed bug 1005313, with patch.
AndroidCameraInputStream: mishandles partial write counts, will garble stream. Filed bug 1006141.)
@@ +59,5 @@
> + this.output = Cc["@mozilla.org/network/buffered-output-stream;1"].
> + createInstance(Ci.nsIBufferedOutputStream);
> + this.output.init(output, BUFFER_SIZE);
> + this.output.QueryInterface(Ci.nsISeekableStream);
> + this._outputOffset = 0;
Suggestion, if it's somehow not practical to remove the buffered stream altogether: it seems like this._outputOffset is synonymous with this.output.tell(); so you could eliminate the property, and just check 'this.output.tell()' before and after the write to see if anything had been written.
@@ +87,5 @@
> + }
> + return this.copied;
> + },
> +
> + _copy: function() {
I like the way this function always terminates in one of three ways:
1) after calling this.baseAsyncOutput.asyncWait
2) after calling this.input.asyncWait
3) after all data has been written
The asyncWait handling is so simple it must be correct.
@@ +127,5 @@
> + },
> +
> + _flush: function() {
> + try {
> + this.output.flush();
If nsBufferedOutputStream::Flush fails to flush its entire buffer, but does write some data, then it throws NS_ERROR_FAILURE, not NS_BASE_STREAM_WOULD_BLOCK.
https://hg.mozilla.org/mozilla-central/file/2897fb554990/netwerk/base/src/nsBufferedStreams.cpp#l635
@@ +166,5 @@
> +};
> +
> +/**
> + * Read from a stream, one byte at a time, up to the next |delimiter|
> + * character, but stopping if we've read |count| without finding it.
The comment should also explain that we stop before we would block.
You might want to mention bug 984651 in this comment.
::: toolkit/devtools/transport/tests/unit/test_dbgsocket_connection_drop.js
@@ +40,5 @@
> return test_helper('27asd:{"to":"root","type":"echo"}');
> }
>
> function test_socket_conn_drops_after_too_long_header() {
> + return test_helper('4305724038957487634549823475894325:');
There are two things to test here: header length, and length of the alleged packet. I think this changes the former to the latter; it would be nice to test both.
::: toolkit/devtools/transport/transport.js
@@ +103,3 @@
>
> this.hooks = null;
> + this.active = false;
Suggestion, if you like: it seems like the spots where we check the 'active' flag might more simply check whether 'hooks' is set, since we zap 'hooks' whenever we clear 'active'. One less bit of state.
@@ +181,2 @@
> this._input.close();
> this._output.close();
Should we also close _scriptableInput here?
@@ +257,1 @@
> this._flushOutgoing();
I gather that, since we call _flushOutgoing() after calling a packet's write, we always clean out 'done' packets from the head of the queue in the same event dispatch that finished them, so there's never a chance we'll call packet.write when the packet is done. Does that sound correct?
@@ +328,5 @@
> + this._waitForIncoming();
> + } catch(e if e.result == Cr.NS_BASE_STREAM_CLOSED ||
> + e.result == Cr.NS_ERROR_CONNECTION_REFUSED ||
> + e.result == Cr.NS_ERROR_OFFLINE) {
> + this.close(e.result);
Do we want to conceal these exceptions? Should the catch clause re-throw?
@@ +539,5 @@
> + *
> + * This case is much simpler than the full DebuggerTransport, since there is
> + * no primary stream we have to worry about managing while we hand it off to
> + * others temporarily. Instead, we can just make a single use pipe and be
> + * done with it.
This made me laugh. I wasn't sure what bulk packets really meant for LocalDebuggerTransport.
It seems fine to have this for symmetry's sake, but if anyone's actually making this code run in a real Firefox, then we should be doing whatever we're trying to accomplish differently.
Attachment #8404130 -
Flags: review?(jimb) → review+
Reporter | ||
Comment 70•10 years ago
|
||
Comment on attachment 8404393 [details] [diff] [review]
Part 2 (v3): Bulk data support in client and server
Review of attachment 8404393 [details] [diff] [review]:
-----------------------------------------------------------------
Okay --- I think I'm done looking this over, and I have some questions.
::: toolkit/devtools/client/dbg-client.jsm
@@ +618,5 @@
> * @param aRequest object
> * A JSON packet to send to the debugging server.
> * @param aOnResponse function
> * If specified, will be called with the response packet when
> * debugging server responds.
This should specify that aOnResponse only gets JSON packets.
@@ +621,5 @@
> * If specified, will be called with the response packet when
> * debugging server responds.
> + * @return Request
> + * This object emits a number of events to allow you to respond to
> + * different parts of the request lifecycle.
This comment intimidated me, because "lifecycle" made me think "began sending; sending complete; began receiving reply; ...". But it's simply replies, right?
@@ +716,5 @@
> + * passed as event data.
> + * * bulk-reply: The server replied with bulk data, which you can read
> + * using the event data object containing:
> + * * actor: Name of actor that received the packet
> + * * type: Name of actor's method that was called on receipt
As far as the existing JSON packets are concerned, many replies don't actually have types. I think this should say:
type: The type the server assigned the bulk reply packet, if any.
::: toolkit/devtools/server/actors/root.js
@@ +162,5 @@
> RootActor.prototype = {
> constructor: RootActor,
> applicationType: "browser",
>
> + traits: {
Would Object.freeze({...}) be a good idea here, since this object is going to be shared?
::: toolkit/devtools/server/main.js
@@ +1037,5 @@
>
> return null;
> },
>
> + _getOrCreateActor: function(actorID) {
Thanks for abstracting this out.
@@ +1091,5 @@
> message: errorString
> };
> },
>
> + _queueResponse: function(actorID, from, type, response) {
And thanks here, too.
If I'm reading the original code right, |actorID| and |from| should always be equal. The |_actorResponses| table isn't supposed to be establishing any ordering between different actors' packets, so it must be true. So we only need one argument.
@@ +1173,5 @@
> }
>
> + // Clone the packet, so we can safely add the reply helper
> + aPacket = Cu.cloneInto(aPacket, {});
> + // Helper for replying with bulk data from the actor
Why is it helpful to store this closure on the packet? The closure uses only information available from the packet itself (.to; .type), or available to the actor (the transport), so I don't see why it can't just be another method on DebuggerServerConnection.prototype.
@@ +1204,5 @@
> '" does not recognize the packet type "' +
> aPacket.type + '"') };
> }
>
> + // There will not be a return value if a buly reply is sent
s/buly/bulk/, and needs a period.
@@ +1225,5 @@
> + * not close the stream when reading is complete
> + * * done: If you use the stream directly (instead of |copyTo|
> + * below), you must signal completion by resolving /
> + * rejecting this deferred. If you do use |copyTo|, this is
> + * taken care of for you when copying completes.
This comment should specify what the value that the promise is resolved / rejected to gets used for.
@@ +1258,5 @@
> + ret = actor.requestTypes[type].call(actor, packet);
> + } catch(e) {
> + this.transport.send(this._unknownError(
> + "error occurred while processing bulk packet '" + type, e));
> + packet.done.reject(e);
Here we're rejecting packet.done passing an exception object...
@@ +1265,5 @@
> + ret = { error: "unrecognizedPacketType",
> + message: ('Actor "' + actorKey +
> + '" does not recognize the bulk packet type "' +
> + type + '"') };
> + packet.done.reject(ret);
And here we're rejecting packet.done passing a packet. How is this meant to work? From the other patch, it looks like it's ignored...
Reporter | ||
Updated•10 years ago
|
Flags: needinfo?(jryans)
Assignee | ||
Comment 71•10 years ago
|
||
(In reply to Jim Blandy :jimb from comment #70)
> @@ +621,5 @@
> > * If specified, will be called with the response packet when
> > * debugging server responds.
> > + * @return Request
> > + * This object emits a number of events to allow you to respond to
> > + * different parts of the request lifecycle.
>
> This comment intimidated me, because "lifecycle" made me think "began
> sending; sending complete; began receiving reply; ...". But it's simply
> replies, right?
Perhaps a different word is better here... but in any case the comment goes on to describe the list of events. For sending JSON it's:
* json-reply
* bulk-reply
and when sending bulk, it's:
* bulk-send-ready
* json-reply
* bulk-reply
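For the bulk-sending path, usage works out to roughly this (the method and property names are approximate):

let request = client.startBulkRequest({
  actor: uploadActor,
  type: "uploadPackage",
  length: packageSize
});
request.on("bulk-send-ready", ({ copyFrom }) => {
  // copyFrom streams exactly |length| bytes from our input stream to the
  // server and signals completion for us.
  copyFrom(packageInputStream);
});
request.on("json-reply", (reply) => {
  dump("Server acknowledged the upload\n");
});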
> @@ +716,5 @@
> > + * passed as event data.
> > + * * bulk-reply: The server replied with bulk data, which you can read
> > + * using the event data object containing:
> > + * * actor: Name of actor that received the packet
> > + * * type: Name of actor's method that was called on receipt
>
> As far as the existing JSON packets are concerned, many replies don't
> actually have types. I think this should say:
>
> type: The type the server assigned the bulk reply packet, if any.
The "type" here is the same value as "type" field that was in the original request from client to server. As with the "actor" field, it is simply echoed back here. The reply helper in the server defaults to copying these two values from the original incoming packet the server received. My expectation is that they would remain unmodified in most situations.
> ::: toolkit/devtools/server/actors/root.js
> @@ +162,5 @@
> > RootActor.prototype = {
> > constructor: RootActor,
> > applicationType: "browser",
> >
> > + traits: {
>
> Would Object.freeze({...}) be a good idea here, since this object is going
> to be shared?
Sure, that seems to make sense to me.
> @@ +1173,5 @@
> > }
> >
> > + // Clone the packet, so we can safely add the reply helper
> > + aPacket = Cu.cloneInto(aPacket, {});
> > + // Helper for replying with bulk data from the actor
>
> Why is it helpful to store this closure on the packet? The closure uses only
> information available from the packet itself (.to; .type), or available to
> the actor (the transport), so I don't see why it can't just be another
> method on DebuggerServerConnection.prototype.
I did this because many actor methods today don't access their DebuggerServerConnection object when replying, since in the case of JSON replies, you just need to return a value to effectively reply.
Also, if it were moved to DebuggerServerConnection.prototype.replyBulk (or something like that), then the actor has to copy over the actor & type fields itself (or possibly pass in the original packet solely for the purpose of achieving the same copy as this helper here). By using a closure here, the actor only needs to include the additional information we can't know already, which is mainly the length field.
So, I think the helper is simpler for users of the API, since it avoids repetition.
> @@ +1225,5 @@
> > + * not close the stream when reading is complete
> > + * * done: If you use the stream directly (instead of |copyTo|
> > + * below), you must signal completion by resolving /
> > + * rejecting this deferred. If you do use |copyTo|, this is
> > + * taken care of for you when copying completes.
>
> This comment should specify what the value that the promise is resolved /
> rejected to gets used for.
Agreed, I'll update it. No value is expected when resolved.
> Here we're rejecting packet.done passing an exception object...
> And here we're rejecting packet.done passing a packet. How is this meant to work?
> From the other patch, it looks like it's ignored...
When it's rejected, the transport is closed and the exception or packet is used as the reason value.
In the Part 1 patch, see BulkPacket.prototype.read from transport/packet.js. In lines 273 - 278, it handles the resolution or rejection there.
Flags: needinfo?(jryans)
Reporter | ||
Comment 72•10 years ago
|
||
(In reply to J. Ryan Stinnett [:jryans] from comment #71)
> (In reply to Jim Blandy :jimb from comment #70)
> > Why is it helpful to store this closure on the packet? The closure uses only
> > information available from the packet itself (.to; .type), or available to
> > the actor (the transport), so I don't see why it can't just be another
> > method on DebuggerServerConnection.prototype.
>
> I did this because many actor methods today don't access their
> DebuggerServerConnection object when replying, since in the case of JSON
> replies, you just need to return a value to effectively reply.
>
> Also, if it were moved to DebuggerServerConnection.prototype.replyBulk (or
> something like that), then the actor has to copy over the actor & type
> fields itself (or possibly pass in the original packet solely for the
> purpose of achieving the same copy as this helper here). By using a closure
> here, the actor only needs to include the additional information we can't
> know already, which is mainly the length field.
>
> So, I think the helper is simpler for users of the API, since it avoids
> repetition.
I don't think it should be placed on JSON packets, because it breaks the "type" of those values: they're supposed to be the parsed form of a JSON blob. Bulk packets should be rare; it doesn't seem much to ask actors to go to their transport when they need to send one. Even JSON-only actors need to do this anyway if their interaction pattern isn't a simple request-reply pattern.
However, placing it on the bulk packets' "packet" value seems fine and appropriate. That's a form that's under our control.
Assignee | ||
Comment 73•10 years ago
|
||
(In reply to Jim Blandy :jimb from comment #72)
> (In reply to J. Ryan Stinnett [:jryans] from comment #71)
> > (In reply to Jim Blandy :jimb from comment #70)
> > > Why is it helpful to store this closure on the packet? The closure uses only
> > > information available from the packet itself (.to; .type), or available to
> > > the actor (the transport), so I don't see why it can't just be another
> > > method on DebuggerServerConnection.prototype.
> >
> > I did this because many actor methods today don't access their
> > DebuggerServerConnection object when replying, since in the case of JSON
> > replies, you just need to return a value to effectively reply.
> >
> > Also, if it were moved to DebuggerServerConnection.prototype.replyBulk (or
> > something like that), then the actor has to copy over the actor & type
> > fields itself (or possibly pass in the original packet solely for the
> > purpose of achieving the same copy as this helper here). By using a closure
> > here, the actor only needs to include the additional information we can't
> > know already, which is mainly the length field.
> >
> > So, I think the helper is simpler for users of the API, since it avoids
> > repetition.
>
> I don't think it should be placed on JSON packets, because it breaks the
> "type" of those values: they're supposed to be the parsed form of a JSON
> blob. Bulk packets should be rare; it doesn't seem much to ask actors to go
> to their transport when they need to send one. Even JSON-only actors need to
> do this anyway if their interaction pattern isn't a simple request-reply
> pattern.
>
> However, placing it on the bulk packets' "packet" value seems fine and
> appropriate. That's a form that's under our control.
Okay, I understand the argument to leave JSON forms alone. For symmetry then, I'll omit the reply helper from both packet types. I'll offer startBulkSend on DSC.prototype, just like send today, which actors will use for replying.
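On the actor side, that will end up looking something like this (a sketch; the actor method and the stream / length variables are made up):

onGetProfile: function(request) {
  // Without the reply helper, the actor echoes |actor| and |type| itself.
  this.conn.startBulkSend({
    actor: request.to,
    type: request.type,
    length: profileByteLength
  }).then(({ copyFrom }) => {
    copyFrom(profileStream);
  });
  // No JSON return value: the bulk packet is the reply.
}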
Reporter | ||
Comment 74•10 years ago
|
||
(In reply to J. Ryan Stinnett [:jryans] from comment #71)
> Perhaps a different word is better here... but in any case the comment goes
> on to describe the list of events.
Yeah; the comment is fine as is.
Reporter | ||
Comment 75•10 years ago
|
||
Comment on attachment 8404393 [details] [diff] [review]
Part 2 (v3): Bulk data support in client and server
Review of attachment 8404393 [details] [diff] [review]:
-----------------------------------------------------------------
Looks great; r=me with comments addressed.
::: toolkit/devtools/server/main.js
@@ +1265,5 @@
> + ret = { error: "unrecognizedPacketType",
> + message: ('Actor "' + actorKey +
> + '" does not recognize the bulk packet type "' +
> + type + '"') };
> + packet.done.reject(ret);
We discussed this on IRC; we decided to create an exception to pass to the rejection, and also construct the packet to serve as the return value.
Attachment #8404393 -
Flags: review?(jimb) → review+
Assignee | ||
Comment 76•10 years ago
|
||
(In reply to Jim Blandy :jimb from comment #69)
> Comment on attachment 8404130 [details] [diff] [review]
> Part 1 (v3): Bulk data support in the transport layer
>
> Review of attachment 8404130 [details] [diff] [review]:
> -----------------------------------------------------------------
> @@ +308,5 @@
> > + this._transport.pauseOutgoing();
> > +
> > + let deferred = defer();
> > +
> > + this._readyForWriting.resolve({
>
> Should we just use this.streamReadyForWriting here? Otherwise, it's
> effectively part of BulkPacket's contract that someone invokes the
> readyForWriting getter before calling 'write', which, while likely, seems
> delicate and obscure.
Changed to init |_readyForWriting| in the constructor, so it's always safe to use. Here we need the actual deferred, not the promise that |streamReadyForWriting| would give us.
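i.e. roughly:

// The deferred now exists from construction, so write() can always resolve
// it, whether or not anyone has touched the getter yet.
function BulkPacket(transport) {
  Packet.call(this, transport);
  this._readyForWriting = defer();
}

Object.defineProperty(BulkPacket.prototype, "streamReadyForWriting", {
  get: function() {
    return this._readyForWriting.promise;
  }
});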
> ::: toolkit/devtools/transport/stream-utils.js
> @@ +54,5 @@
> > + // Some output streams are already buffered, so this is not always required
> > + // for buffering alone. However, this extra layer also gives us access to the
> > + // nsISeekableStream interface that buffered streams happen to support, which
> > + // has the |tell()| function. We can use this as a clear way to always know
> > + // exactly how much we have written.
>
> Now that bug 1004313 has landed, it should be safe to assume that a
> NS_BASE_STREAM_WOULD_BLOCK error means that no data has been transferred. I
> did an audit of the code base's nsIOutputStream::writeFrom implementations,
> and the nsIInputStream::ReadSegments implementations that some of the
> writeFrom implementations might use, and as far as I can see, they should
> all be safe. So this code can be simplified to use the 'output' stream as-is.
I was indeed able to remove the special handling of NS_BASE_STREAM_WOULD_BLOCK, so the buffered stream / tell() is no longer needed in many cases. :D Thanks!
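The write-or-wait step in |_copy| now boils down to roughly:

try {
  // With bug 1004313 fixed, NS_BASE_STREAM_WOULD_BLOCK from writeFrom()
  // means nothing was written, so the raw output stream works as-is.
  let bytesCopied = this.output.writeFrom(this.input, this.amountLeft);
  this.amountLeft -= bytesCopied;
} catch (e if e.result == Cr.NS_BASE_STREAM_WOULD_BLOCK) {
  // Nothing was transferred; wait until the output is writable again.
  this.baseAsyncOutput.asyncWait(this, 0, 0, Services.tm.currentThread);
}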
> @@ +127,5 @@
> > + },
> > +
> > + _flush: function() {
> > + try {
> > + this.output.flush();
>
> If nsBufferedOutputStream::Flush fails to flush its entire buffer, but does
> write some data, then it throws NS_ERROR_FAILURE, not
> NS_BASE_STREAM_WOULD_BLOCK.
>
> https://hg.mozilla.org/mozilla-central/file/2897fb554990/netwerk/base/src/
> nsBufferedStreams.cpp#l635
Interesting! So many error codes... I'm now catching this one too. Though in practice, I don't believe I've seen it actually occur. Anyways, now we're handling it if it does.
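The handling amounts to roughly:

_flush: function() {
  try {
    this.output.flush();
  } catch (e if e.result == Cr.NS_BASE_STREAM_WOULD_BLOCK ||
                e.result == Cr.NS_ERROR_FAILURE) {
    // A partial flush throws NS_ERROR_FAILURE rather than WOULD_BLOCK;
    // either way, wait for the base stream and flush again later.
    this.baseAsyncOutput.asyncWait(this, 0, 0, Services.tm.currentThread);
  }
},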
> ::: toolkit/devtools/transport/tests/unit/test_dbgsocket_connection_drop.js
> @@ +40,5 @@
> > return test_helper('27asd:{"to":"root","type":"echo"}');
> > }
> >
> > function test_socket_conn_drops_after_too_long_header() {
> > + return test_helper('4305724038957487634549823475894325:');
>
> There are two things to test here: header length, and length of the alleged
> packet. I think this changes the former to the latter; it would be nice to
> test both.
You're right, I've added an extra test for the other case.
> @@ +181,2 @@
> > this._input.close();
> > this._output.close();
>
> Should we also close _scriptableInput here?
Good idea, I've added that.
> @@ +257,1 @@
> > this._flushOutgoing();
>
> I gather that, since we call _flushOutgoing() after calling a packet's
> write, we always clean out 'done' packets from the head of the queue in the
> same event dispatch that finished them, so there's never a chance we'll call
> packet.write when the packet is done. Does that sound correct?
Yes, that's correct.
> @@ +328,5 @@
> > + this._waitForIncoming();
> > + } catch(e if e.result == Cr.NS_BASE_STREAM_CLOSED ||
> > + e.result == Cr.NS_ERROR_CONNECTION_REFUSED ||
> > + e.result == Cr.NS_ERROR_OFFLINE) {
> > + this.close(e.result);
>
> Do we want to conceal these exceptions? Should the catch clause re-throw?
I think it's okay to absorb them, as they are "normal" when the other side of a socket disappears or closes their connection.
Assignee | ||
Comment 77•10 years ago
|
||
Attachment #8411788 -
Attachment is obsolete: true
Attachment #8421500 -
Flags: review+
Assignee | ||
Comment 78•10 years ago
|
||
Attachment #8397480 -
Attachment is obsolete: true
Attachment #8421501 -
Flags: review+
Assignee | ||
Comment 79•10 years ago
|
||
Attachment #8404130 -
Attachment is obsolete: true
Attachment #8421502 -
Flags: review+
Assignee | ||
Comment 80•10 years ago
|
||
Attachment #8404393 -
Attachment is obsolete: true
Attachment #8421503 -
Flags: review+
Assignee | ||
Comment 81•10 years ago
|
||
Attachment #8375129 -
Attachment is obsolete: true
Attachment #8421504 -
Flags: review+
Assignee | ||
Comment 82•10 years ago
|
||
Comment on attachment 8421502 [details] [diff] [review]
Part 1 (v4, jimb: r+): Bulk data support in the transport layer
Eddy, this is the bulk data transport patch I mentioned. I've now rebased it onto your various recent server and transport changes. Jim has already reviewed the functionality here, but I just want to ensure I am not creating new headaches for your worker debugging work.
Let me know if you see something I am importing / requiring, etc. that needs to change to support the worker case.
Attachment #8421502 -
Flags: review?(ejpbruel)
Assignee | ||
Comment 83•10 years ago
|
||
The rest of the patches here now cause a few Network Monitor tests to leak because more events keep coming in after the monitor has been torn down.
Attachment #8422019 -
Flags: review?(vporof)
Comment 84•10 years ago
|
||
Comment on attachment 8422019 [details] [diff] [review]
Part 4: Fix Network Monitor test leaks
Review of attachment 8422019 [details] [diff] [review]:
-----------------------------------------------------------------
::: browser/devtools/netmonitor/test/browser_net_filter-04.js
@@ -22,5 @@
> "The first filter type is correct.");
> is(Prefs.filters[1], "bogus",
> "The second filter type is invalid, but loaded anyway.");
>
> - waitForNetworkEvents(aMonitor, 7).then(() => {
Does this mean we weren't waiting for enough events in the first place? Sigh...
Attachment #8422019 -
Flags: review?(vporof) → review+
Assignee | ||
Comment 85•10 years ago
|
||
(In reply to Victor Porof [:vporof][:vp] from comment #84)
> Comment on attachment 8422019 [details] [diff] [review]
> Part 4: Fix Network Monitor test leaks
>
> Review of attachment 8422019 [details] [diff] [review]:
> -----------------------------------------------------------------
>
> ::: browser/devtools/netmonitor/test/browser_net_filter-04.js
> @@ -22,5 @@
> > "The first filter type is correct.");
> > is(Prefs.filters[1], "bogus",
> > "The second filter type is invalid, but loaded anyway.");
> >
> > - waitForNetworkEvents(aMonitor, 7).then(() => {
>
> Does this mean we weren't waiting for enough events in the first place?
> Sigh...
Yeah, I think some of the events for the 8th request might have come in by then, just not all, or something like that.
Assignee | ||
Comment 86•10 years ago
|
||
Attachment #8421502 -
Attachment is obsolete: true
Attachment #8421502 -
Flags: review?(ejpbruel)
Attachment #8422647 -
Flags: review+
Assignee | ||
Comment 87•10 years ago
|
||
Attachment #8421503 -
Attachment is obsolete: true
Attachment #8422649 -
Flags: review+
Assignee | ||
Comment 88•10 years ago
|
||
Attachment #8421504 -
Attachment is obsolete: true
Attachment #8422651 -
Flags: review+
Assignee | ||
Comment 89•10 years ago
|
||
Try is green: https://tbpl.mozilla.org/?tree=Try&rev=0c8fd1b74440
I've also manually tested:
* Desktop
* Browser Toolbox
* b2g
I tried Fennec too, but currently it's broken by bug 859372.
Assignee | ||
Comment 90•10 years ago
|
||
remote: https://hg.mozilla.org/integration/fx-team/rev/f20466641641
remote: https://hg.mozilla.org/integration/fx-team/rev/463401983c4c
remote: https://hg.mozilla.org/integration/fx-team/rev/e97f6c0bbdd6
remote: https://hg.mozilla.org/integration/fx-team/rev/f6290ce0b388
remote: https://hg.mozilla.org/integration/fx-team/rev/798860192cb4
remote: https://hg.mozilla.org/integration/fx-team/rev/1f71b3099720
Whiteboard: [b2g][debugger-docs-needed] → [b2g][debugger-docs-needed][fixed-in-fx-team]
https://hg.mozilla.org/mozilla-central/rev/f20466641641
https://hg.mozilla.org/mozilla-central/rev/463401983c4c
https://hg.mozilla.org/mozilla-central/rev/e97f6c0bbdd6
https://hg.mozilla.org/mozilla-central/rev/f6290ce0b388
https://hg.mozilla.org/mozilla-central/rev/798860192cb4
https://hg.mozilla.org/mozilla-central/rev/1f71b3099720
Status: ASSIGNED → RESOLVED
Closed: 10 years ago
Resolution: --- → FIXED
Whiteboard: [b2g][debugger-docs-needed][fixed-in-fx-team] → [b2g][debugger-docs-needed]
Target Milestone: --- → Firefox 32
Updated•6 years ago
|
Product: Firefox → DevTools