Closed
Bug 842243
Opened 12 years ago
Closed 11 years ago
Implement the framework and encoding pipeline of MediaEncoder
Categories
(Core :: Audio/Video: Recording, defect)
Tracking
Tracking | Status
---|---
firefox26 | fixed
People
(Reporter: chiajung, Assigned: shelly)
References
(Blocks 2 open bugs)
Details
(Whiteboard: [MR1.2][FT: Media Recording], [Sprint: 2])
Attachments
(6 files, 72 obsolete files)
6 patches (all deleted), each: shelly: review+
We need a MediaEncoder framework to encode a MediaStream for network transport, media file generation, transcoding, or a simple audio-recording application.
We currently have a wiki page that tracks the framework design:
https://wiki.mozilla.org/Gecko:MediaRecorder
We want to implement the audio encoder first, driven by the MediaRecorder API.
Reporter
Comment 1•12 years ago
Hi roc,
Can you help review the architecture? We would like to make sure the architecture is OK and then start implementing the audio part first.
We have a simple architecture sketch, basic interface definitions, and the basic code flow based on that architecture in the wiki page.
If you notice any problems, please point them out in this bug!
Thanks
Flags: needinfo?(roc)
A few quick comments:
-- Internally we shouldn't need generic parameter structures like CodecParams. We can specify exactly the set of parameters we support.
-- A comment for each class describing what that class is for would be really helpful. In particular I'm not sure what the role of the MediaSegmentAdapter classes is.
-- What's status_t? We should try to minimize the number of methods that return an error.
-- Instead of "Codec", I think "Encoder" is a better name. "Codec" could mean encoding or decoding.
-- I suggest creating an AudioTrackEncoder interface and a VideoTrackEncoder interface instead of MediaCodec. Change MediaWriter to MediaEncoder and have it expose a set of AudioTrackEncoders and VideoTrackEncoders. Make the current MediaEncoder into a private helper class (say MediaEncoderListener) of MediaRecorder.
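A minimal sketch of how that suggested split could look (the names and signatures below are assumptions for illustration, not the interfaces that eventually landed):

#include <stdint.h>
#include "nscore.h"
#include "nsTArray.h"
#include "nsAutoPtr.h"
#include "AudioSegment.h"
#include "VideoSegment.h"

// Hypothetical sketch of the suggested split; the real classes in
// content/media/encoder/ may differ.
class TrackEncoder {
public:
  virtual ~TrackEncoder() {}
  // Pull whatever encoded bytes this track encoder has produced so far.
  virtual nsresult GetEncodedTrack(nsTArray<uint8_t>& aOutput) = 0;
};

class AudioTrackEncoder : public TrackEncoder {
public:
  // Raw PCM appended from the MediaStreamGraph thread.
  virtual nsresult AppendAudioSegment(const AudioSegment& aSegment) = 0;
};

class VideoTrackEncoder : public TrackEncoder {
public:
  virtual nsresult AppendVideoSegment(const VideoSegment& aSegment) = 0;
};

// The renamed MediaEncoder (formerly MediaWriter) owns the per-track
// encoders and muxes their output; MediaRecorder drives it and keeps its
// own private listener helper for MediaStream callbacks.
class MediaEncoder {
public:
  AudioTrackEncoder* GetAudioEncoder() const { return mAudioEncoder; }
  VideoTrackEncoder* GetVideoEncoder() const { return mVideoEncoder; }
private:
  nsAutoPtr<AudioTrackEncoder> mAudioEncoder;
  nsAutoPtr<VideoTrackEncoder> mVideoEncoder;
};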
Flags: needinfo?(roc)
Reporter
Comment 3•12 years ago
(In reply to Robert O'Callahan (:roc) (Mozilla Corporation) from comment #2)
> A few quick comments:
> -- Internally we shouldn't need generic parameter structures like
> CodecParams. We can specify exactly the set of parameters we support.
I designed it this way to utilize features provided by the codec. Since I am not sure how much detail the MediaRecorder API will request from the encoder, I think this provides some flexibility for future extension.
> -- A comment for each class describing what that class is for would be
> really helpful. In particular I'm not sure what the role of the
> MediaSegmentAdapter classes is.
I have now added a comment for each class in the wiki page :)
A MediaSegmentAdapter is basically a queue. The idea is similar to a Pad in GStreamer or a Port in OpenMAX.
> -- What's status_t? We should try to minimize the number of methods that
> return an error.
changed status_t to nsresult.
> -- Instead of "Codec", I think "Encoder" is a better name. "Codec" could
> mean encoding or decoding.
> -- I suggest creating AudioTrackEncoder interface and VideoTrackEncoder
> interfaces instead of MediaCodec. Change MediaWriter to MediaEncoder and
> have it expose a set of AudioTrackEncoders and VideoTrackEncoders. Make the
> current MediaEncoder into private helper class (say MediaEncoderListener) of
> MediaRecorder.
Naming changed to follow your suggestion. For MediaEncoder, I will discuss with Randy and update that part later.
Thanks for your precious suggestions!!!
Reporter
Comment 4•12 years ago
(In reply to Chiajung Hung [:chiajung] from comment #3)
> (In reply to Robert O'Callahan (:roc) (Mozilla Corporation) from comment #2)
> > A few quick comments:
> > -- Internally we shouldn't need generic parameter structures like
> > CodecParams. We can specify exactly the set of parameters we support.
> I designed it this way to utilize some feature provided by codec. And since
> I am not sure how detail MediaRecord API would request encoder, I think this
> may provide some flexibility for future extension.
>
> > -- A comment for each class describing what that class is for would be
> > really helpful. In particular I'm not sure what the role of the
> > MediaSegmentAdapter classes is.
> I added a comment for each classes in the wiki page now :)
>
> A MediaSegmentAdpater is basically a queue. The idea is similar to Pad in
> GStreamer or Port in OpenMAX.
>
> > -- What's status_t? We should try to minimize the number of methods that
> > return an error.
> changed status_t to nsresult.
>
> > -- Instead of "Codec", I think "Encoder" is a better name. "Codec" could
> > mean encoding or decoding.
> > -- I suggest creating AudioTrackEncoder interface and VideoTrackEncoder
> > interfaces instead of MediaCodec. Change MediaWriter to MediaEncoder and
> > have it expose a set of AudioTrackEncoders and VideoTrackEncoders. Make the
> > current MediaEncoder into private helper class (say MediaEncoderListener) of
> > MediaRecorder.
> Naming changed to fulfill your suggestion. For MediaEncoder, I will discuss
> with Randy and update that part later.
After some discussion, MediaRecorder will make sure all states are valid and ask this class for data and status.
Thus I changed some of this class's interface and renamed it as you suggested; I think it can be a private class of MediaRecorder.
>
> Thanks for your precious suggestions!!!
(In reply to Chiajung Hung [:chiajung] from comment #3)
> (In reply to Robert O'Callahan (:roc) (Mozilla Corporation) from comment #2)
> > A few quick comments:
> > -- Internally we shouldn't need generic parameter structures like
> > CodecParams. We can specify exactly the set of parameters we support.
> I designed it this way to utilize some feature provided by codec. And since
> I am not sure how detail MediaRecord API would request encoder, I think this
> may provide some flexibility for future extension.
If we want to add parameters in the future we can add fields to the Gecko struct.
> > -- A comment for each class describing what that class is for would be
> > really helpful. In particular I'm not sure what the role of the
> > MediaSegmentAdapter classes is.
> I added a comment for each classes in the wiki page now :)
>
> A MediaSegmentAdpater is basically a queue. The idea is similar to Pad in
> GStreamer or Port in OpenMAX.
A MediaSegment is already just a sequence of chunks. Why can't you just use that, with AppendSlice? Or we could add some APIs to help you if you want to treat it like a queue.
I don't know why you need GrallocAdapter. Image objects are reference counted and VideoSegment does not need to copy any images.
Reporter
Comment 6•12 years ago
(In reply to Robert O'Callahan (:roc) (Mozilla Corporation) from comment #5)
> (In reply to Chiajung Hung [:chiajung] from comment #3)
> > (In reply to Robert O'Callahan (:roc) (Mozilla Corporation) from comment #2)
> > > A few quick comments:
> > > -- Internally we shouldn't need generic parameter structures like
> > > CodecParams. We can specify exactly the set of parameters we support.
> > I designed it this way to utilize some feature provided by codec. And since
> > I am not sure how detail MediaRecord API would request encoder, I think this
> > may provide some flexibility for future extension.
>
> If we want to add parameters in the future we can add fields to the Gecko
> struct.
OK, then let me remove that part.
>
> > > -- A comment for each class describing what that class is for would be
> > > really helpful. In particular I'm not sure what the role of the
> > > MediaSegmentAdapter classes is.
> > I added a comment for each classes in the wiki page now :)
> >
> > A MediaSegmentAdpater is basically a queue. The idea is similar to Pad in
> > GStreamer or Port in OpenMAX.
>
> A MediaSegment is already just a sequence of chunks. Why can't you just use
> that, with AppendSlice? Or we could add some APIs to help you if you want to
> treat it like a queue.
Hmm... sounds OK to me, but we may need some mechanism to make sure it is thread safe, right?
>
> I don't know why you need GrallocAdapter. Image objects are reference
> counted and VideoSegment does not need to copy any images.
I think we may need to copy frames in some situations. For example, a VideoFrame from the camera is backed by a GraphicBuffer that lives inside the camera's ring buffer. Once that VideoFrame is passed into a MediaSegment, the camera cannot reuse the buffer, and if the encoder is very slow, the camera must stop and wait for encoding.
But if we keep some frames copied and do not block the camera, we can decouple them slightly better (though we still need to set an upper bound).
I will discuss with others and update the wiki page based on your suggestion later.
Thanks!
(In reply to Chiajung Hung [:chiajung] from comment #6)
> Hmm...sounds OK for me, but we may need some mechanism to make sure it is
> thread safe, right?
Yes, but a mutex in MediaEncoder should be enough to take care of that.
> > I don't know why you need GrallocAdapter. Image objects are reference
> > counted and VideoSegment does not need to copy any images.
> I think we may need copy frame in some situation. For example, the
> VideoFrame from camera backed with GraphicBuffer, which is inside a ring
> buffer of Camera. When the VideoFrame passed into MediaSegment, Camera can
> not reuse the buffer and if the encoder very slow, Camera must stop to wait
> for encoding.
OK. This is tricky because of course you would normally want to avoid making copies, if the encoder is fast enough.
Can you handle this inside the camera code? e.g. if the camera needs a new buffer and doesn't have one, then and only then have it copy the GraphicBuffer, storing the copy of the data into the existing Image and stealing the Image's GraphicBuffer back to the camera?
In general we want Gecko code to be able to take and hold references to Image objects indefinitely. If the camera code can't allow that to happen to its GraphicBuffers, then we have to find a way to decouple Images from GraphicBuffers without changing all Image-using Gecko code.
Reporter
Comment 9•12 years ago
(In reply to Robert O'Callahan (:roc) (Mozilla Corporation) from comment #8)
> In general we want Gecko code to be able to take and hold references to
> Image objects indefinitely. If the camera code can't allow that to happen to
> its GraphicBuffers, then we have to find a way to decouple Images from
> GraphicBuffers without changing all Image-using Gecko code.
In the Android world the camera is an OpenMAX component: when you set up a native window for it, it sets the buffer count and uses those buffers as a ring buffer.
In fact, the camera will not let you lock a buffer indefinitely. When that happens, it simply stops and waits. That is why, when people try to port B2G to some platforms, they complain that the camera just hangs with only one frame shown.
Different camera implementations may also need a different number of GraphicBuffers returned to their ring before they send another frame. We have met one that needs 1 GraphicBuffer returned for every 1 it outputs, which is different from Unagi, which keeps outputting frames until no buffer is available.
That causes problems, since Stagefright normally queues all the buffers into the OpenMAX IL component on start; we would expect the Unagi behavior, but some cameras just don't do that and hang there. I think some encoder/decoder implementations may be similar if you configure the codec to use GraphicBuffers as its output.
And I think there is no way to copy-on-need, since the camera just calls nativeWindow->dequeue and then waits on some condition there, so only the nativeWindow (GonkNativeWindow in this case) knows about it. If we want copy-on-need, we have to make the nativeWindow expose its buffering state and let the encoder query it.
But I think that is infeasible, since the MediaEncoder side should only see a MediaStream and should not know what the backend is. As a result, we should not (and cannot) reach the nativeWindow to query its state.
Or do you think we should find a way to shake hands under the table so we can recognize what the MediaStream's backend is:
(1) Stagefright with GraphicBuffer (for the transcoding case)
(2) Local camera stream from dom/camera or GetUserMedia
(3) Others (decoders other than Stagefright, a WebRTC input stream, etc.)
If case (1) or (2) occurs, we could let the MediaStream reference the underlying nativeWindow (if feasible :P) and take care of the ring buffer state. I am not sure whether this is possible, though... :S
So is the solution to make a copy of every frame we get from the camera?
Reporter
Comment 11•12 years ago
(In reply to Robert O'Callahan (:roc) (Mozilla Corporation) from comment #10)
> So is the solution to make a copy of every frame we get from the camera?
We do not copy camera frames on Unagi, since it keeps producing output buffers until there is no free buffer in its ring.
On the platform that outputs only 1 buffer and waits for it to be returned, we have a workaround that allocates more GraphicBuffers than requested for testing, but no real solution yet (we simply allow more buffers to be allocated than the camera requested, since the b2g process only locks 1 buffer).
To find a correct solution for this camera hang problem, I think we will need to fix some layers code.
And the workaround is not feasible for the encoder case, since the encoder may queue all the ring buffers and lock them (the compositor always locks 1, but the encoder may lock all of them).
Let me rephrase the question. *Should* we copy all the frames we get from the camera?
Reporter
Comment 13•12 years ago
(In reply to Robert O'Callahan (:roc) (Mozilla Corporation) from comment #12)
> Let me rephrase the question. *Should* we copy all the frames we get from
> the camera?
Basically we *should not*, but I am afraid that we *have to*.
Comment 14•12 years ago
(In reply to Chiajung Hung [:chiajung] from comment #13)
> (In reply to Robert O'Callahan (:roc) (Mozilla Corporation) from comment #12)
> > Let me rephrase the question. *Should* we copy all the frames we get from
> > the camera?
>
> Basically we *should not*, but I am afraid that we *have to*.
When video frames become high resolution on a mobile phone, copying every video frame takes too much CPU time and cannot deliver product quality.
Comment 15•12 years ago
(In reply to Chiajung Hung [:chiajung] from comment #9)
> (In reply to Robert O'Callahan (:roc) (Mozilla Corporation) from comment #8)
>
> And different camera implementation may need different amount of
> GraphicBuffer return to its ring before it send another frame. We had met
> one that need return 1 GraphicBuffer when it output 1 which is different
> from Unagi which will output many until no buffer available.
>
Which hardware works like this? Can you tell me the name?
Comment 16•12 years ago
(In reply to Sotaro Ikeda [:sotaro] from comment #14)
> (In reply to Chiajung Hung [:chiajung] from comment #13)
> > (In reply to Robert O'Callahan (:roc) (Mozilla Corporation) from comment #12)
> > > Let me rephrase the question. *Should* we copy all the frames we get from
> > > the camera?
> >
> > Basically we *should not*, but I am afraid that we *have to*.
>
> When video frame becomes high resolution on mobile phone, video frame copy
> spend too much cpu time and could not provide product quality.
High-end Android phones provide Full-HD, 60 FPS video recording. In such a video recording case, copying video frames on the CPU should not happen.
Comment 17•12 years ago
The following is my understanding of GonkNativeWindow.
- GonkNativeWindow provides GraphicBuffers to the camera HW HAL for preview frames. It does not provide buffers for the actual video data used for encoding.
- ANativeWindow's SurfaceTexture implementation in Android works in async mode: SurfaceTexture::dequeueBuffer() does not block when it runs out of free buffers. But the corresponding GonkNativeWindow works in sync mode, so GonkNativeWindow::dequeueBuffer() can take a long time when free buffers run out.
A SurfaceTexture is the final destination of a GraphicBuffer, but GonkNativeWindow is not: it needs to forward GraphicBuffers to the Compositor. That is why it works in sync mode, and it can block a preview update thread in the camera HW HAL.
Reporter
Comment 18•12 years ago
(In reply to Sotaro Ikeda [:sotaro] from comment #14)
> (In reply to Chiajung Hung [:chiajung] from comment #13)
> > (In reply to Robert O'Callahan (:roc) (Mozilla Corporation) from comment #12)
> > > Let me rephrase the question. *Should* we copy all the frames we get from
> > > the camera?
> >
> > Basically we *should not*, but I am afraid that we *have to*.
>
> When video frame becomes high resolution on mobile phone, video frame copy
> spend too much cpu time and could not provide product quality.
Yes, this is true. But, for example, the Android camera defaults to outputting the NV21 color format, which libvpx (for VP8 encoding) does not support. We have to copy and convert the color format to feed data into the encoder.
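For reference, a minimal sketch of the kind of copy-and-convert step this implies (a hypothetical standalone helper, not code from any patch in this bug): NV21 stores a full-resolution Y plane followed by interleaved V/U samples at quarter resolution, while the I420 layout that libvpx consumes uses separate U and V planes.

#include <stdint.h>
#include <string.h>

// Convert one NV21 frame (Y plane followed by interleaved V/U samples) into
// I420 (Y plane plus separate U and V planes). Width and height are assumed
// even; strides are assumed equal to the width for simplicity.
static void ConvertNV21ToI420(const uint8_t* aSrc,
                              uint8_t* aDstY, uint8_t* aDstU, uint8_t* aDstV,
                              int aWidth, int aHeight)
{
  // The luma plane has the same layout in both formats.
  memcpy(aDstY, aSrc, aWidth * aHeight);

  // NV21 chroma: V and U interleaved after the Y plane, at quarter resolution.
  const uint8_t* vu = aSrc + aWidth * aHeight;
  const int chromaSamples = (aWidth / 2) * (aHeight / 2);
  for (int i = 0; i < chromaSamples; ++i) {
    aDstV[i] = vu[2 * i];     // V comes first in NV21
    aDstU[i] = vu[2 * i + 1];
  }
}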
Reporter
Comment 19•12 years ago
(In reply to Sotaro Ikeda [:sotaro] from comment #15)
> (In reply to Chiajung Hung [:chiajung] from comment #9)
> > (In reply to Robert O'Callahan (:roc) (Mozilla Corporation) from comment #8)
> >
> > And different camera implementation may need different amount of
> > GraphicBuffer return to its ring before it send another frame. We had met
> > one that need return 1 GraphicBuffer when it output 1 which is different
> > from Unagi which will output many until no buffer available.
> >
>
> Which hardware works like this? Can you tell the name?
The device I got is a proprietary demo-only device. Its CPU is an 8960, I think; if you want more detail, I can find someone who knows later :)
Reporter
Comment 20•12 years ago
(In reply to Sotaro Ikeda [:sotaro] from comment #16)
> (In reply to Sotaro Ikeda [:sotaro] from comment #14)
> > (In reply to Chiajung Hung [:chiajung] from comment #13)
> > > (In reply to Robert O'Callahan (:roc) (Mozilla Corporation) from comment #12)
> > > > Let me rephrase the question. *Should* we copy all the frames we get from
> > > > the camera?
> > >
> > > Basically we *should not*, but I am afraid that we *have to*.
> >
> > When video frame becomes high resolution on mobile phone, video frame copy
> > spend too much cpu time and could not provide product quality.
>
> High-end android phone provide, Full-HD and 60FPS video recording. In such
> video recording case video frame copy by cpu should not happen.
Sure. The camera recorder (camcorder) on Android does some tricks under the table, and those apply to Gonk in the current codebase too.
Basically, Stagefright is an OpenMAX-AL-equivalent module, and every component in Stagefright is some kind of MediaSource. In the camera case, the MediaSource (CameraSource) hands MediaBuffers (backed by GraphicBuffers from the NativeWindow) to the hardware codec, which encodes those buffers directly.
And since there is no way to ask Android to use a software codec to encode camera data without hacking Stagefright, the encoding performance can be guaranteed.
However, we generally want to encode video with a free codec such as Theora or VP8, which has no hardware support right now, so we have to take software codecs into consideration.
By the way, this framework is not meant to replace any existing module such as the camcorder. It is the backend for the MediaRecorder API, which will be used on PC too.
Reporter
Comment 21•12 years ago
(In reply to Sotaro Ikeda [:sotaro] from comment #17)
> Following is my understanding of GonkNativeWindow.
>
> - GonkNativeWindow provides GraphicBuffers to camera hw hal for preview
> frames. It do not provide buffers for autual video data for Encoding.
I think the buffers *should* be the same, otherwise the camera hardware would have to provide 2 streams when recording is enabled, wouldn't it? And in fact preview and recording are not separable in the current codebase. See:
http://mxr.mozilla.org/mozilla-central/source/dom/camera/GonkCameraHwMgr.cpp#417
I will try to double-confirm this with some logging if I can. (Since the data callback in GonkCameraSource delivers an IMemory, if it is actually the GraphicBuffer from GonkNativeWindow, the virtual address will remain the same as long as there is no lock/unlock (mmap/munmap) operation in between.)
>
> - ANativeWindow's implementation of SurfaceTexture in android works in async
This is configurable in Android and defaults to sync mode. Check SurfaceTextureLayer.cpp under surfaceflinger.
> mode. SurfaceTexture::dequeueBuffer() is not blocked because of out of free
> buffers. But corresponding GonkNativeWindow works in sync mode.
> GonkNativeWindow::dequeueBuffer() could take long time by out of free
> buffers.
> In SurfaceTexture, it is the final destination of GraphicBuffer. But
> GonkNativeWindow is not final destination of GraphicBuffer. It is necessary
> to forward GraphicBuffers to Compositor. Then it needs to work in sync mode.
> It could block a preview update thread in camera hw hal.
I think the difference comes from this:
SurfaceFlinger always asks which GraphicBuffer is the newest and never locks any buffer until rendering. However, ImageLayer in Gecko usually locks a frame and waits for the next frame to come (see ImageBridge).
Since a GraphicBuffer *should* always handle the multiple-lock-different-usage case, and SurfaceFlinger takes this into consideration, SurfaceFlinger can do asynchronous rendering more safely than CompositorParent. (SurfaceFlinger always locks for USAGE_HW_TEXTURE <-- HW_READ, and the camera always locks for USAGE_HW_RENDER <-- HW_WRITE, so the GraphicBuffer backend (the gralloc HAL) should make sure all write operations are done before a read lock is granted, and vice versa.)
Comment 22•12 years ago
> > - ANativeWindow's implementation of SurfaceTexture in android works in async
> This is configurable in Android and default to sync mode. Check
> SurfaceTextureLayer.cpp under surfaceflinger.
Sorry, I was talking only about the camera and video playback; those are the only use cases for GonkNativeWindow. And in those use cases it is always async, if the HW platform supports async.
http://androidxref.com/4.0.4/xref/frameworks/base/services/surfaceflinger/SurfaceTextureLayer.cpp#104
Reporter
Comment 23•12 years ago
(In reply to Sotaro Ikeda [:sotaro] from comment #22)
> > > - ANativeWindow's implementation of SurfaceTexture in android works in async
> > This is configurable in Android and default to sync mode. Check
> > SurfaceTextureLayer.cpp under surfaceflinger.
>
> Sorry, I talked only about Camera and Video playback. It is the only
> usecases for GonkNativeWindow. And in the use cases, it is always async, if
> hw platform supports async.
>
> http://androidxref.com/4.0.4/xref/frameworks/base/services/surfaceflinger/
> SurfaceTextureLayer.cpp#104
Oh, right! I think async mode is the key to the better preview FPS on Android compared to Gonk right now :p
Comment 24•12 years ago
(In reply to Chiajung Hung [:chiajung] from comment #21)
> (In reply to Sotaro Ikeda [:sotaro] from comment #17)
> > Following is my understanding of GonkNativeWindow.
> >
> > - GonkNativeWindow provides GraphicBuffers to camera hw hal for preview
> > frames. It do not provide buffers for autual video data for Encoding.
> I think the buffer *should* be the same otherwise camera hardware have to
> provide 2 stream when recording enabled, isn't it? And in fact the preview
> and recording is not separable in current codebase. See:
How buffers are provided for preview frames and recording frames is at the camera HAL's discretion. Almost all camera HALs use different heaps for preview frames and recording frames, and the heap for recording frames is allocated in the camera HAL. In the qcom ics_chocolate (Unagi) case, it is allocated in QualcommCameraHardware::initRecord().
https://www.codeaurora.org/gitweb/quic/la/?p=platform/hardware/qcom/camera.git;a=blob;f=QualcommCameraHardware.cpp;h=7ba1b2936410e76f358c5dad165a72491c5bbd00;hb=refs/heads/ics_chocolate#l6926
So the camera HAL provides two different video streams. Even in this configuration it is not easy to provide high-quality video recording.
If recording frames and preview frames shared the same buffers, the preview task could easily block the recording task. I saw this configuration on early Android platforms,
and it caused serious video recording defects.
Reporter
Comment 25•12 years ago
(In reply to Sotaro Ikeda [:sotaro] from comment #24)
> (In reply to Chiajung Hung [:chiajung] from comment #21)
> > (In reply to Sotaro Ikeda [:sotaro] from comment #17)
> > > Following is my understanding of GonkNativeWindow.
> > >
> > > - GonkNativeWindow provides GraphicBuffers to camera hw hal for preview
> > > frames. It do not provide buffers for autual video data for Encoding.
> > I think the buffer *should* be the same otherwise camera hardware have to
> > provide 2 stream when recording enabled, isn't it? And in fact the preview
> > and recording is not separable in current codebase. See:
>
> How to provide buffers for preview frames and recording frames is camera
> hal's implementation discretion. Almost all camera hals use different heaps
> between preview frames and recording frame. And a heap of recording frame is
> allocated in camera hal. In qcom ics chocolate's(unagi) case, it is
> allocated in QualcommCameraHardware::initRecord().
>
> https://www.codeaurora.org/gitweb/quic/la/?p=platform/hardware/qcom/camera.
> git;a=blob;f=QualcommCameraHardware.cpp;
> h=7ba1b2936410e76f358c5dad165a72491c5bbd00;hb=refs/heads/ics_chocolate#l6926
>
> So, camera hal provide two different video streams. Even in this
> configration. It is not easy to provide high quality video recoridng.
>
> If recording frames and preview frames share same buffers. preview task
> could easily block recoridng task. I saw this configration in early android
> platform.
> And it caused serious defect of video recording.
Apparently I was wrong. Thanks for the information!
Then it would not be a good idea to use the MediaRecorder API to encode a high-quality camera stream (and that is not our goal :)).
But it is possible to get the camera preview stream from GetUserMedia directly and use that preview stream as the input of MediaRecorder. In this case we would encode the preview frames from GonkNativeWindow, and in order not to block the preview and/or to convert the format to something the codec accepts, a frame-by-frame copy/convert may be inevitable.
Comment 26•12 years ago
Android also uses preview frames for encoding when applying a video effect. mca uses OpenGL for rendering and video effects.
http://androidxref.com/4.2.2_r1/xref/frameworks/base/media/mca/
http://androidxref.com/4.2.2_r1/xref/frameworks/av/media/libstagefright/SurfaceMediaSource.cpp
The video frames used for encoding there do not use GraphicBuffer but IMemory, so they cannot be bound to a texture. That is the reason, I think.
Comment 27•12 years ago
One big problem with camera preview frames is the timestamp. The camera HAL does not provide timestamps, and GonkNativeWindow also does not provide timestamps for the preview stream.
The camera HAL does provide timestamps for recording frames; normally they come from the camera driver. In the early days of Android, qcom's platform generated the timestamp within the camera HAL code (user side), and that timestamp was unstable and could not be used for a product.
Comment 28•12 years ago
In the case of Android's SurfaceMediaSource, the timestamp is provided by the SurfaceTexture implementation. The timestamp is assigned when the GraphicBuffer is queued to the SurfaceTexture, so it can be very different from the actual capture time and can jitter.
Comment 29•12 years ago
(In reply to Sotaro Ikeda [:sotaro] from comment #28)
> In android's SurfaceMediaSource's case, timestamp is provided by
> SurfaceTexture implementation. The timestamp is assigned when GraphicBuffer
> is queued to SurfaceTexture. So it could be very different from actual
> timestamp and could jitter.
It is a timestamp of preview rendering.
Comment 30•12 years ago
Another problem with preview frames is that they are best-effort: the FPS is not guaranteed.
Comment 31•12 years ago
(In reply to Chiajung Hung [:chiajung] from comment #25)
> (In reply to Sotaro Ikeda [:sotaro] from comment #24)
//snip
> But it is possible to get camera preview stream from GetUserMedia directly,
> and use the preview stream as input of MediaRecord. In this case, we would
> use the preview frame from GonkNativeWindow to encode. And to not block
> preview and/or convert the format to codec compatible in this case,
> frame-by-frame copy/convert may be inevitable.
Yeah, it is inevitable with the current hardware and framework. If we do a frame copy, GonkNativeWindow needs to work in async mode, I think.
Comment 32•12 years ago
And preview frames are, originally, meant for preview. The current preview has latency and jitter problems on Firefox OS. A camera preview needs to be rendered as soon as possible so it feels responsive. On Android the camera preview is responsive, but on Firefox OS it has a noticeable lag; that is bug 844248.
Comment 33•12 years ago
(In reply to Sotaro Ikeda [:sotaro] from comment #31)
> (In reply to Chiajung Hung [:chiajung] from comment #25)
> > (In reply to Sotaro Ikeda [:sotaro] from comment #24)
> //snip
> > But it is possible to get camera preview stream from GetUserMedia directly,
> > and use the preview stream as input of MediaRecord. In this case, we would
> > use the preview frame from GonkNativeWindow to encode. And to not block
> > preview and/or convert the format to codec compatible in this case,
> > frame-by-frame copy/convert may be inevitable.
>
> Yeah, it is inevitable in current hardware and framework. By doing frame
> copy, GonkNativeWindow need to work in async mode, I think.
In the current camera app's use cases, we need to avoid copying preview frames. If we want to provide high-quality video recording, all frame copies need to be avoided; there is no CPU time for them.
Reporter
Comment 34•12 years ago
(In reply to Sotaro Ikeda [:sotaro] from comment #33)
> (In reply to Sotaro Ikeda [:sotaro] from comment #31)
> > (In reply to Chiajung Hung [:chiajung] from comment #25)
> > > (In reply to Sotaro Ikeda [:sotaro] from comment #24)
> > //snip
> > > But it is possible to get camera preview stream from GetUserMedia directly,
> > > and use the preview stream as input of MediaRecord. In this case, we would
> > > use the preview frame from GonkNativeWindow to encode. And to not block
> > > preview and/or convert the format to codec compatible in this case,
> > > frame-by-frame copy/convert may be inevitable.
> >
> > Yeah, it is inevitable in current hardware and framework. By doing frame
> > copy, GonkNativeWindow need to work in async mode, I think.
>
> In current camera app's use cases, it needs to avoid preview frame copy. If
> we want to provide high quality video recording. All frame copy needs to be
> prevented. There is no cpu time for it.
Sure, we want to avoid any frame-by-frame copy if possible. The only possibility I can think of in this case is to try to use the hardware codec with a GraphicBuffer (attached to a MediaBuffer) and see whether that works.
In that case GonkNativeWindow must work in async mode; otherwise every buffer queued for encoding must stay locked.
The last problem is that we may want to support codecs for which no hardware codec is available. As the MediaRecorder API does not dictate which codecs we must support, we may query the hardware and then use the hardware codec only for the video case on Firefox OS.
Assignee
Comment 35•12 years ago
Draft of Encoder Framework.
Assignee
Updated•12 years ago
Assignee: nobody → slin
Assignee
Updated•12 years ago
Component: General → Video/Audio
Product: Boot2Gecko → Core
Assignee
Comment 36•12 years ago
There are still some threading issues in MediaRecorder, so I have left |Shutdown()| empty; maybe we don't even need it. I think whoever owns the MediaEncoder should handle the ref count.
All the files are inside content/media; any suggestions on folder layout and placement?
Flags: needinfo?(roc)
Comment 37•12 years ago
Comment on attachment 732637 [details] [diff] [review]
draft v1: Media Encoder Framework
Review of attachment 732637 [details] [diff] [review]:
-----------------------------------------------------------------
::: content/media/MediaEncoder.cpp
@@ +71,5 @@
> +}
> +
> +nsresult
> +MediaEncoder::Init()
> +{
Take care of double init here.
Add a CleanUp function to release acquired resources.
@@ +104,5 @@
> + unsigned char header[HEADER_BUFFER_LEN];
> + int headerBufferLength = 0;
> +
> + int duration = 0;
> + int outLen = 0;
Move these two variables into the ENCODE_DATA case statement.
@@ +126,5 @@
> + }
> + break;
> +
> + case ENCODE_DATA:
> + mState = MediaEncoder::ENCODING;
Instead of changing the state directly, add a stateChange function.
In debug builds, add some code to that stateChange function to validate state transitions.
::: content/media/OggWriter.cpp
@@ +6,5 @@
> +namespace mozilla {
> +
> +nsresult
> +OggWriter::Init()
> +{
1. The client side may call the Init function twice. Prevent double init here
(calling ogg_stream_init twice causes a memory leak):
either an assertion or an mInitialized boolean member.
2. Make sure to release the resources allocated by ogg, such as
mOggStreamState->body_data
mOggStreamState->lacing_vals
3. Initialize data members here, such as
mShouldFlush = false
@@ +21,5 @@
> + return NS_OK;
> +}
> +
> +void
> +OggWriter::WriteEncodedData(unsigned char *buffer, int length, int duration) {
Assert(buffer != nullptr && length != 0)
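Putting those Init() notes together, the guarded initialization could look roughly like this (an illustrative sketch; only mInitialized, mShouldFlush, and mOggStreamState come from the review above, the rest is assumed):

nsresult
OggWriter::Init()
{
  // Guard against double initialization: running ogg_stream_init() on an
  // already-initialized state would leak its internal buffers.
  if (mInitialized) {
    return NS_ERROR_ALREADY_INITIALIZED;
  }
  // The serial number 0 is a placeholder; a real writer should pick a
  // unique/random stream serial.
  if (ogg_stream_init(&mOggStreamState, 0) != 0) {
    return NS_ERROR_FAILURE;
  }
  // Initialize data members explicitly.
  mShouldFlush = false;
  mInitialized = true;
  return NS_OK;
}

OggWriter::~OggWriter()
{
  if (mInitialized) {
    // ogg_stream_clear() frees the buffers libogg allocated inside an
    // embedded ogg_stream_state (body_data, lacing values, ...).
    ogg_stream_clear(&mOggStreamState);
  }
}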
(In reply to Shelly Lin [:shelly] from comment #36)
> All files are inside content/media, any suggestion of the folders and
> placing?
content/media/encoder sounds good to me.
Flags: needinfo?(roc)
Assignee
Comment 39•12 years ago
Attachment #732637 - Attachment is obsolete: true
Attachment #734505 - Flags: feedback?(cyu)
Assignee
Comment 40•12 years ago
(In reply to C.J. Ku[:CJKu] from comment #37)
> Comment on attachment 732637 [details] [diff] [review]
> draft v1: Media Encoder Framework
>
> Review of attachment 732637 [details] [diff] [review]:
> -----------------------------------------------------------------
>
> ::: content/media/MediaEncoder.cpp
> @@ +71,5 @@
> > +}
> > +
> > +nsresult
> > +MediaEncoder::Init()
> > +{
>
> Take care double init here.
The following implementation prevents double initialization in the new patch:
if (GetCurrentState() != MediaEncoder::IDLE) {
  return NS_ERROR_ALREADY_INITIALIZED;
}
> Add CleanUp function to clear aquire resource.
I'm not sure we really need a CleanUp function in the encoder; all resources and data are initialized in the constructor and its Init function, and whoever owns this encoder should be responsible for releasing it.
> ::: content/media/OggWriter.cpp
> @@ +6,5 @@
> > +namespace mozilla {
> > +
> > +nsresult
> > +OggWriter::Init()
> > +{
>
> 1. Client side may call Init function twice. Prevent double init here.
> (Call ogg_stream_init twice cause memory leak)
>
> Assertion crash or mInitiated boolean member
A boolean flag |mInitialized| is used to prevent double initialization.
>
> 2. Make sure release resouce allocate by ogg. Such as
> mOggStreamState->body_data
> mOggStreamState->laning_values
>
ogg_stream_destroy() is called in the destructor of OggWriter.
> 3. Initiate data members here, such as
> mShouldFlush = false
>
|mShouldFlush| is already initialized in the constructor; why should we initialize this member here again?
> @@ +21,5 @@
> > + return NS_OK;
> > +}
> > +
> > +void
> > +OggWriter::WriteEncodedData(unsigned char *buffer, int length, int duration) {
>
> Assert(buffer != null & length !=0)
Error handling has already been added in |WriteEncodedData|.
Assignee
Comment 41•12 years ago
Added ogg_stream_destroy() in the destructor of OggWriter.
Attachment #734505 - Attachment is obsolete: true
Attachment #734505 - Flags: feedback?(cyu)
Attachment #734523 - Flags: feedback?(cyu)
Attachment #734523 - Flags: feedback?(cku)
Comment 42•12 years ago
Comment on attachment 734523 [details] [diff] [review]
Draft 2: Media Encoder Framework
I am concerned about holding mReentrantMonitor for too long. Encoding while holding the monitor is not a good idea since the encoder can be CPU intensive, and it will block the MediaStreamListener callbacks. If shared data needs to be accessed by more than one thread, make sure we do only fast operations (like moving references to the buffers) while holding the monitor, and then release the monitor to do the slow operations.
Attachment #734523 - Flags: feedback?(cyu)
Comment 43•12 years ago
Comment on attachment 734523 [details] [diff] [review]
Draft 2: Media Encoder Framework
Review of attachment 734523 [details] [diff] [review]:
-----------------------------------------------------------------
::: content/media/MediaEncoder.h
@@ +18,5 @@
> +
> +class AudioTrackEncoder;
> +class VideoTrackEncoder;
> +class MediaWriter;
> +
Add more comments here describing the object relations among
MediaEncoder / CodecBase / Writer.
State the basic design principles here for successors.
@@ +209,5 @@
> + VideoTrackEncoder() : CodecEncoderBase() {};
> + virtual ~VideoTrackEncoder() {};
> +protected:
> +};
> +
You don't really need to define the class here.
Forward-declare VideoTrackEncoder before MediaEncoder instead:
class VideoTrackEncoder;
::: content/media/OggWriter.h
@@ +16,5 @@
> +public:
> + OggWriter(AudioTrackEncoder* aAudioEnc, VideoTrackEncoder* aVideoEnc)
> + : MediaWriter(aAudioEnc, aVideoEnc)
> + , mShouldFlush(false)
> + {}
memset(&mOggStreamState, 0, sizeof(ogg_stream_state));
::: content/media/OpusEncoder.cpp
@@ +10,5 @@
> +#include "VideoUtils.h"
> +
> +#include "opus/opus.h"
> +#include <vector>
> +
Is there a need to include this header?
@@ +41,5 @@
> + OpusHeaderInt(const OpusHeaderInt<T> & other)
> + {
> + mValue = other.mValue;
> + }
> +
Hide the default constructor by putting it in the private section.
@@ +49,5 @@
> + {
> + return mValue;
> + }
> +
> + size_t GetSize()
size_t GetSize() const
@@ +54,5 @@
> + {
> + return sizeof(T);
> + }
> +
> + void SerializeToBuffer(unsigned char *buff, int bufflen, int &outLen)
const function as well
@@ +184,5 @@
> +private:
> + class OpusHeaderString {
> + public:
> + OpusHeaderString(char *src)
> + : mLength(src ? strlen(src) : 0)
mData(nullptr)
@@ +186,5 @@
> + public:
> + OpusHeaderString(char *src)
> + : mLength(src ? strlen(src) : 0)
> + {
> + mData = strdup(src);
With mLength set up, you may use strndup (performance concern):
if (mLength)
  mData = strndup(src, mLength);
@@ +198,5 @@
> +
> + ~OpusHeaderString()
> + {
> + mLength = (uint32_t) (0xdeadbaad);
> + free(mData);
if (mData)
free(mData);
@@ +240,5 @@
> +{
> +}
> +
> +OpusEncoder::~OpusEncoder()
> +{
if (mEncoder)
@@ +304,5 @@
> + return kFrameSizeSamples / mRateMultiplier;
> +}
> +
> +bool
> +OpusEncoder::RequiresFlush()
Do you have to override this function here?
::: content/media/OpusEncoder.h
@@ +27,5 @@
> + virtual void NotifyEndOfStream();
> + virtual bool RequiresFlush();
> + virtual int GetFrameSize();
> +
> + nsresult Init();
Should be a private member function
Attachment #734523 - Flags: feedback?(cku)
Comment 44•12 years ago
Overall, code quality looks good.
Comment 45•12 years ago
Comment on attachment 734523 [details] [diff] [review]
Draft 2: Media Encoder Framework
Review of attachment 734523 [details] [diff] [review]:
-----------------------------------------------------------------
::: content/media/OggWriter.h
@@ +10,5 @@
> +#include "ogg/ogg.h"
> +
> +namespace mozilla {
> +
> +class OggWriter : public MediaWriter
Please put these Ogg specific files in content/media/ogg/
@@ +22,5 @@
> + ~OggWriter() {
> + ogg_stream_destroy(&mOggStreamState);
> + }
> +
> + virtual nsresult Init();
Functions that override don't need a "virtual" keyword, but it's a good idea to have MOZ_OVERRIDE at the end, i.e. please make that:
nsresult Init() MOZ_OVERRIDE;
Assignee
Comment 46•12 years ago
Thanks for all your feedback :)
This new patch not only revises the coding style, but also moves the encoder-related files to content/media/encoder and the Ogg files to content/media/ogg.
TODO list for this patch:
1. Revise the current implementation of resource locking (per :cyu's feedback).
2. Settle on a threading model with MediaRecorder; the original design of the encoder state machine is "non-blocking".
3. Future expansion: the protocol bwterrn codec encoder and container writer.
4. More comments.
Assignee
Updated•12 years ago
Attachment #736687 - Flags: feedback?(cyu)
Attachment #736687 - Flags: feedback?(cku)
Assignee
Comment 47•12 years ago
Typo fix for the TODO list:
3. Future expansion: the protocol between* codec encoder and container writer.
Assignee
Comment 48•12 years ago
Comment on attachment 736687 [details] [diff] [review]
v1: Media Encoder Framework
Review of attachment 736687 [details] [diff] [review]:
-----------------------------------------------------------------
::: content/media/encoder/OpusEncoder.cpp
@@ +201,5 @@
> +
> + ~OpusHeaderString()
> + {
> + mLength = (uint32_t) (0xdeadbaad);
> + free(mData);
Forgot to add |if (mData)|, will fix it next time.
Assignee
Comment 49•12 years ago
Comment on attachment 734523 [details] [diff] [review]
Draft 2: Media Encoder Framework
Review of attachment 734523 [details] [diff] [review]:
-----------------------------------------------------------------
::: content/media/OggWriter.h
@@ +16,5 @@
> +public:
> + OggWriter(AudioTrackEncoder* aAudioEnc, VideoTrackEncoder* aVideoEnc)
> + : MediaWriter(aAudioEnc, aVideoEnc)
> + , mShouldFlush(false)
> + {}
memset is actually called inside ogg_stream_init(), so I've moved the Init() call into the constructor.
::: content/media/OpusEncoder.cpp
@@ +10,5 @@
> +#include "VideoUtils.h"
> +
> +#include "opus/opus.h"
> +#include <vector>
> +
<vector> is needed by OpusHeaderString.
Comment 50•12 years ago
- Please remove the test code that's not supposed to be checked in from the patch.
- Don't use printf() here. If you think the messages are important for the user to notice, then we should use a portable logging API (like PR_Log). If they are only used for debugging purposes, please be sure to remove them.
- Using ReentrantMonitorAutoEnter and ReentrantMonitorAutoExit in the same method doesn't look correct.
I originally thought the encoder should work non-blockingly as a passive component. But doing this requires some notification mechanism so the caller can know when more data can be read from the encoder; otherwise, the caller would have to poll periodically, which makes things more complicated. So let's just use a monitor for waiting and notifying between threads to keep things simple.
Assignee
Comment 51•12 years ago
Added an |mSourceSegment| to OpusEncoder to handle interleaving. |mSourceSegment| pulls at least the needed amount of raw data from |mRawSegment| before the actual encoding procedure, and the reentrant monitor holds the lock only during that period.
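Roughly, the pattern described above keeps only the cheap hand-off under the monitor and runs the CPU-heavy encode outside it; a simplified sketch (the EncodeFromSourceSegment helper, the mEndOfStream flag, and the method signature are assumptions, not the patch's actual members):

nsresult
OpusEncoder::GetEncodedData(nsTArray<uint8_t>* aOutput)
{
  {
    // Hold the monitor only long enough to move queued chunks out of the
    // segment that the MediaStreamGraph thread appends to.
    ReentrantMonitorAutoEnter mon(mReentrantMonitor);
    while (mRawSegment->GetDuration() < GetFrameSize() && !mEndOfStream) {
      mon.Wait(); // woken by the graph thread after it appends more data
    }
    mSourceSegment->AppendFrom(mRawSegment);
  } // monitor released here

  // The CPU-intensive Opus encode runs without the monitor, so it cannot
  // block the MediaStreamListener callbacks.
  return EncodeFromSourceSegment(aOutput);
}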
Attachment #737421 - Flags: feedback?(cyu)
Assignee
Comment 52•12 years ago
Comment on attachment 736687 [details] [diff] [review]
v1: Media Encoder Framework
Review of attachment 736687 [details] [diff] [review]:
-----------------------------------------------------------------
::: content/media/encoder/MediaEncoder.cpp
@@ +87,5 @@
> + int headerBufferLength = 0;
> + res = mAudioEncoder->GetHeader(header, sizeof(header), headerBufferLength);
> + NS_ENSURE_SUCCESS(res, NS_OK);
> + res = mWriter->WriteEncodedData(header, headerBufferLength, 0);
> + NS_ENSURE_SUCCESS(res, NS_OK);
Hey Cervantes,
Is it because the Ogg header ought to be "IdHeader" + "CommentHeader" plus a flush that we put an |ENCODE_HEADER| action first in the Init function? If so, should this block of implementation be specific to Ogg?
Assignee
Comment 53•12 years ago
Comment on attachment 736687 [details] [diff] [review]
v1: Media Encoder Framework
Review of attachment 736687 [details] [diff] [review]:
-----------------------------------------------------------------
::: content/media/encoder/MediaEncoder.h
@@ +128,5 @@
> +
> + // Holds a reference to the worker thread.
> + void SetEncoderThread(nsIThread* aThread) {
> + if (!mEncoderThread) {
> + mEncoderThread = aThread;
Hi Cervantes, is this the correct way of holding a reference to a thread? mEncoderThread is set to nullptr in the destructor of MediaEncoder.
Assignee
Comment 54•12 years ago
Applied the fix from "v1: Media Encoder Framework, a fix patch only", reducing the period for which resources stay locked.
The threading model seems to be solid as it is now.
More comments and clean-ups.
TODO:
1. Needs a flush after marking eos on an ogg packet.
2. Make sure that no more packets are appended after eos.
Comment 55•12 years ago
Comment on attachment 737421 [details] [diff] [review]
v1: Media Encoder Framework, a fix patch only
Review of attachment 737421 [details] [diff] [review]:
-----------------------------------------------------------------
This looks better than the original.
Attachment #737421 - Flags: feedback?(cyu) → feedback+
Comment 56•12 years ago
(In reply to Shelly Lin [:shelly] from comment #52)
> Comment on attachment 736687 [details] [diff] [review]
> v1: Media Encoder Framework
>
> Review of attachment 736687 [details] [diff] [review]:
> -----------------------------------------------------------------
>
> ::: content/media/encoder/MediaEncoder.cpp
> @@ +87,5 @@
> > + int headerBufferLength = 0;
> > + res = mAudioEncoder->GetHeader(header, sizeof(header), headerBufferLength);
> > + NS_ENSURE_SUCCESS(res, NS_OK);
> > + res = mWriter->WriteEncodedData(header, headerBufferLength, 0);
> > + NS_ENSURE_SUCCESS(res, NS_OK);
>
> Hey Cervantas,
>
> Is it because the header of Ogg ought to be "IdHeader"+"CommentHeader" and a
> flush, so that we put a |EOCODE_HEADER| action first in the Init function?
> If so, should this block of implementation be sprcific to Ogg?
This part of code looks strange because Init() performs work belonging to GetEncodedData(). We should remove this duplication.
Assignee
Comment 57•12 years ago
Thanks Cervantes :D
Memo for this patch:
- Applied the fix from "v1: Media Encoder Framework, a fix patch only", reducing the period for which resources stay locked.
- It seems the MediaWriter doesn't need to hold references to AudioTrackEncoder and VideoTrackEncoder, so they were removed.
- Instead, an "eos" (end of stream) argument was added to MediaWriter's |WriteEncodedData|.
- Ogg handles eos and forces a flush at page-out, so there is no need to force a flush ourselves.
Attachment #734523 - Attachment is obsolete: true
Attachment #736687 - Attachment is obsolete: true
Attachment #737896 - Attachment is obsolete: true
Attachment #736687 - Flags: feedback?(cyu)
Attachment #736687 - Flags: feedback?(cku)
Assignee
Comment 58•12 years ago
This patch adds an EncoderTraits class, which creates the corresponding MediaWriter and encoder based on the requested output type of the media container.
Attachment #740185 - Flags: feedback?(cku)
Comment 59•12 years ago
Hi Shelly,
Could you rebase this? The latest mozilla-central moves the EXPORTS *.h definitions from Makefile.in to moz.build.
Assignee
Comment 60•12 years ago
- Rebase to 4/23 m-c.
- Add PR_Log.
- Remove the header implementation in Init() function.
Attachment #737421 - Attachment is obsolete: true
Attachment #738880 - Attachment is obsolete: true
Assignee
Comment 61•12 years ago
Hi roc, I think this encoder patch is ready for review. Could you give me some feedback or review it at your convenience? Thanks a lot.
Attachment #740734 - Attachment is obsolete: true
Attachment #740751 - Flags: review?(roc)
Attachment #740751 - Flags: feedback?(roc)
Assignee
Comment 62•12 years ago
Hi roc, this patch is basically the same as the last one, except for the logging part. I've removed PR_Log since the main development platform for this feature is B2G.
Attachment #740751 - Attachment is obsolete: true
Attachment #740751 - Flags: review?(roc)
Attachment #740751 - Flags: feedback?(roc)
Attachment #741200 - Flags: review?(roc)
Attachment #741200 - Flags: feedback?(roc)
Comment on attachment 741200 [details] [diff] [review]
v3: Media Encoder Framework
Review of attachment 741200 [details] [diff] [review]:
-----------------------------------------------------------------
::: content/media/encoder/EncoderTraits.h
@@ +16,5 @@
> +
> +class EncoderTraits {
> +public:
> + static MediaWriter* CreateWriter(const nsACString& aType);
> + static AudioTrackEncoder* CreateAudioTrackEncoder();
Only need one space after "static"
These methods need to be documented. Especially the format of aType needs to be documented!
Doesn't CreateAudioTrackEncoder need to be documented?
It seems to me we should have a single method to create the MediaWriter and all the track encoders together, which takes as input the MIME type (which may be empty if we're using defaults) and some information about what sorts of tracks are (or will be) present in the input stream. In fact, I think you should just have a CreateEncoder method which returns a fully configured MediaEncoder.
Currently you're setting some global state in CreateWriter and using it in CreateAudioTrackEncoder, which is a very ugly way to do it.
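For illustration, such a factory could take roughly this shape (a hypothetical sketch of the suggestion, not the reviewed code; the MediaEncoder constructor arguments and ownership model are assumed):

// Hypothetical single entry point: choose the container writer and track
// encoders from the requested MIME type and the kinds of tracks present,
// then hand back a fully configured MediaEncoder.
/* static */ MediaEncoder*
EncoderTraits::CreateEncoder(const nsACString& aMIMEType, bool aHasAudioTrack)
{
  nsAutoPtr<MediaWriter> writer;
  nsAutoPtr<AudioTrackEncoder> audioEncoder;

  if (aMIMEType.IsEmpty() || aMIMEType.EqualsLiteral("audio/ogg")) {
    writer = new OggWriter();
    if (aHasAudioTrack) {
      audioEncoder = new OpusEncoder();
    }
  } else {
    return nullptr; // unsupported container type
  }

  return new MediaEncoder(writer.forget(), audioEncoder.forget());
}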
::: content/media/encoder/MediaEncoder.h
@@ +33,5 @@
> + * to Wait() in the worker thread if insufficient raw data, and Notify() in the
> + * thread of MediaStreamGraph if we have appended enough data.
> + *
> + * MediaEncoder holds a reference of the worker thread to ensure that this
> + * thread is not killed before finishing the encoding job.
This comment is very good. We can clean up the English a bit later.
@@ +71,5 @@
> +// virtual void NotifyPull(MediaStreamGraph* aGraph, StreamTime aDesiredTime) {}
> +// virtual void NotifyBlockingChanged(MediaStreamGraph* aGraph, Blocking aBlocked) {}
> +// virtual void NotifyHasCurrentData(MediaStreamGraph* aGraph, bool aHasCurrentData) {}
> +// virtual void NotifyOutput(MediaStreamGraph* aGraph) {}
> +// virtual void NotifyFinished(MediaStreamGraph* aGraph) {};
You can just remove these commented-out methods.
@@ +102,5 @@
> + * | | |
> + * | |---------> ----------------|-------------------|
> + * | no data (error) | v
> + * |----------<----------------------|----->------> ENCODE_DONE --> return
> + * !eof eof
This diagram is not very clear. It might be clearer if you wrote out, in pseudocode, the algorithm that executes on the worker thread.
In particular it's not clear what the function of each method is. They should be documented.
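For reference, the worker-thread algorithm implied by the state names in the diagram might read roughly as follows (a reconstruction from the visible state names, not the patch's literal control flow):

// Pseudocode of the worker-thread loop suggested by the state diagram.
// state = ENCODE_HEADER;
// while (state != ENCODE_DONE) {
//   switch (state) {
//   case ENCODE_HEADER:
//     // Get the track header packets and hand them to the container writer.
//     WriteHeader();
//     state = ENCODE_DATA;
//     break;
//   case ENCODE_DATA:
//     // Wait (on the monitor) until enough raw data has been appended,
//     // encode one chunk, and write it out.
//     EncodeAndWriteChunk();
//     state = ReachedEndOfStream() ? ENCODE_DONE : ENCODE_DATA;
//     break;
//   }
// }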
@@ +104,5 @@
> + * | no data (error) | v
> + * |----------<----------------------|----->------> ENCODE_DONE --> return
> + * !eof eof
> + */
> + void GetEncodedData(unsigned char* buffer, int length, int &encodedLen);
I assume this method is supposed to fill an output buffer with encoded data. It might be simpler to use an nsTArray out-parameter here, i.e.
void GetEncodedData(int aMaxLength, nsTArray<unsigned char>* aOutput);
You can pass an nsAutoTArray in the caller.
@@ +122,5 @@
> + return mState;
> + }
> +
> + // Holds a reference to the worker thread.
> + void SetEncoderThread(nsIThread* aThread) {
One space after "void"
@@ +123,5 @@
> + }
> +
> + // Holds a reference to the worker thread.
> + void SetEncoderThread(nsIThread* aThread) {
> + if (!mEncoderThread) {
Perhaps if mEncoderThread is already set to non-null, that's an error? in that case you should just MOZ_ASSERT(!mEncoderThread).
I wonder though, who calls this? Should the MediaEncoder just set mEncoderThread itself?
@@ +155,5 @@
> + } mEncoderStateInternal;
> +
> + // A reference of a worker thread, this is used to ensure that
> + // |GetEncodedData| is running on a valid thread.
> + nsRefPtr<nsIThread> mEncoderThread;
Don't indent all these field names. I can't even tell why they're indented so far.
@@ +158,5 @@
> + // |GetEncodedData| is running on a valid thread.
> + nsRefPtr<nsIThread> mEncoderThread;
> +};
> +
> +class CodecEncoderBase
This class (and the other classes below in this file) need to be documented. Would it make sense to call this TrackEncoder?
@@ +184,5 @@
> +
> + virtual nsresult GetEncodeData(unsigned char* buffer,
> + int len,
> + int &encodedLen,
> + int &rawDuration) = 0;
What is rawDuration? These methods all need to be documented.
@@ +225,5 @@
> + virtual nsresult EnsureAudioEncoderPrams(int channels,
> + int samplingRate) = 0;
> +
> + // Return the frame size in numbers of samples.
> + virtual int GetFrameSize() = 0;
Don't indent method names.
@@ +232,5 @@
> + // The number of channels in this raw audio data.
> + int mChannels;
> +
> + // The segment of raw audio data.
> + AudioSegment* mRawSegment;
nsAutoPtr<AudioSegment>?
::: content/media/encoder/MediaWriter.h
@@ +9,5 @@
> +#include "nsTArray.h"
> +
> +namespace mozilla {
> +
> +class MediaWriter {
Need comments explaining what this class is for.
@@ +17,5 @@
> + {}
> + virtual ~MediaWriter() {}
> +
> + virtual nsresult WriteEncodedData(unsigned char *buffer, int length,
> + int duration, bool eos) = 0;
Need comments explaining these parameters.
::: content/media/encoder/OpusEncoder.cpp
@@ +28,5 @@
> +InterleaveAndConvertBuffer(const void** aSourceChannels,
> + AudioSampleFormat aSourceFormat,
> + int32_t aLength, float aVolume,
> + int32_t aChannels,
> + AudioDataValue* aOutput);
Include a header file instead of having your own extern.
::: content/media/ogg/OggWriter.h
@@ +10,5 @@
> +#include <ogg/ogg.h>
> +
> +namespace mozilla {
> +
> +class OggWriter : public MediaWriter
Need comments.
@@ +30,5 @@
> +
> + void Flush(bool require) MOZ_OVERRIDE;
> +
> + void GetPacket(unsigned char *buffer, int length, int &outLength)
> + MOZ_OVERRIDE;
Don't indent method names.
I'm going to be away for a few days; I hope these comments are helpful in the meantime.
Comment on attachment 741200 [details] [diff] [review]
v3: Media Encoder Framework
Review of attachment 741200 [details] [diff] [review]:
-----------------------------------------------------------------
::: content/media/encoder/MediaEncoder.h
@@ +33,5 @@
> + * to Wait() in the worker thread if insufficient raw data, and Notify() in the
> + * thread of MediaStreamGraph if we have appended enough data.
> + *
> + * MediaEncoder holds a reference of the worker thread to ensure that this
> + * thread is not killed before finishing the encoding job.
Actually one thing we need to be added to this comment is a description of the threading model. Should all methods for a given MediaEncoder be called from the same thread? Which thread --- can it be any thread?
Assignee
Comment 66•12 years ago
(In reply to Robert O'Callahan (:roc) (Mozilla Corporation) from comment #65)
> Comment on attachment 741200 [details] [diff] [review]
> v3: Media Encoder Framework
>
> Review of attachment 741200 [details] [diff] [review]:
> -----------------------------------------------------------------
>
> ::: content/media/encoder/MediaEncoder.h
> @@ +33,5 @@
> > + * to Wait() in the worker thread if insufficient raw data, and Notify() in the
> > + * thread of MediaStreamGraph if we have appended enough data.
> > + *
> > + * MediaEncoder holds a reference of the worker thread to ensure that this
> > + * thread is not killed before finishing the encoding job.
>
> Actually one thing we need to be added to this comment is a description of
> the threading model. Should all methods for a given MediaEncoder be called
> from the same thread? Which thread --- can it be any thread?
Agreed, but I think we need to clarify the threading model first. Here's my proposal:
1. Since this is a blocking component, all methods of MediaEncoder can be called from any thread *except* the main thread.
2. Currently, the AudioRecorder creates a worker thread to run the encoder but doesn't *manage* the thread, which could cause problems if we shut the worker thread down before the running task finishes. This is why I tried *referencing* the worker thread from its creator; apparently that is bad and wrong. I saw your comment in bug 803414 (https://bugzilla.mozilla.org/show_bug.cgi?id=803414#c62), and thanks for pointing that out. My idea is that whoever uses the MediaEncoder component should be responsible for thread creation and management.
Looping in Randy for this discussion.
Flags: needinfo?(rlin)
Comment 67•12 years ago
I think letting the client manage this worker thread is fine. But the client may need to know how to handle error/warning status if something happens in the encoder.
Flags: needinfo?(rlin)
Would it work if we said that a MediaEncoder should only be used from a single thread (not the main thread)? That would make things very simple.
Assignee | ||
Comment 69•12 years ago
|
||
(In reply to Robert O'Callahan (:roc) (Mozilla Corporation) from comment #68)
> Would it work if we said that a MediaEncoder should only be used from a
> single thread (not the main thread)? That would make things very simple.
Yeah, sounds good to me. Besides documenting the threading design, adding an assertion like |MOZ_ASSERT(!NS_IsMainThread());| should help clarify the usage, or even better, a sanity check in debug builds like the following:
if (!mThread) {
  mThread = NS_GetCurrentThread();
}
MOZ_ASSERT(mThread == NS_GetCurrentThread());
Comment 70•12 years ago
|
||
To clarify, MediaEncoder is designed as a passive component that is agnostic about threading, like the ogg or opus libraries. For methods where there might be race conditions, we use a monitor to make them thread safe (e.g. between the MediaStreamListener callback methods and other methods). Other than that, we don't specify which thread the MediaEncoder should run on. The user of the MediaEncoder should handle the threading.
Because of the use of a monitor, the call to get encoded data from the MediaEncoder might block. The MediaEncoder should not run on a thread that forbids blocking, such as the main thread or the IO thread. The typical usage is to create a thread to do the heavy lifting, block there waiting for input data, and run the encoder only on that thread.
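For illustration, that usage pattern could look roughly like the following (a sketch only; names such as EncoderTask, GetEncodedData and IsShutdown are placeholders from this discussion, not a fixed API):

class EncoderTask : public nsRunnable
{
public:
  explicit EncoderTask(MediaEncoder* aEncoder) : mEncoder(aEncoder) {}
  NS_IMETHOD Run()
  {
    MOZ_ASSERT(!NS_IsMainThread());
    nsTArray<uint8_t> buffer;
    do {
      buffer.Clear();
      // May block inside the encoder, waiting for raw input data.
      mEncoder->GetEncodedData(&buffer);
      // Hand the buffer off to the consumer (e.g. MediaRecorder) here.
    } while (!mEncoder->IsShutdown());
    return NS_OK;
  }
private:
  nsRefPtr<MediaEncoder> mEncoder;
};

// Creator side: one dedicated thread per encoder.
nsCOMPtr<nsIThread> thread;
NS_NewThread(getter_AddRefs(thread), new EncoderTask(encoder));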
Comment 71•12 years ago
|
||
Comment on attachment 741200 [details] [diff] [review]
v3: Media Encoder Framework
Review of attachment 741200 [details] [diff] [review]:
-----------------------------------------------------------------
Just wanted to do a drive-by on the Opus parts. I didn't review the whole patch (I left that for roc).
::: content/media/encoder/MediaEncoder.cpp
@@ +64,5 @@
> + AudioSegment::ChunkIterator iter(*audio);
> + while (!iter.IsEnded()) {
> + AudioChunk chunk = *iter;
> + if (chunk.mBuffer) {
> + mAudioEncoder->EnsureAudioEncoderPrams(chunk.mChannelData.Length(),
"Prams" should presumably be "Params".
::: content/media/encoder/MediaEncoder.h
@@ +221,5 @@
> +
> + nsresult AppendAudioSegment(MediaSegment* aSegment);
> +
> + // Ensure that we have valid parameters for this codec encoder.
> + virtual nsresult EnsureAudioEncoderPrams(int channels,
"Prams" should presumably be "Params".
::: content/media/encoder/OpusEncoder.cpp
@@ +268,5 @@
> + // Init the Opus encoder
> +
> + // Pause here if its required parameters haven't initialized yet.
> + ReentrantMonitorAutoEnter mon(mReentrantMonitor);
> + if (mState < READY) {
Monitors are generally subject to spurious wakeups. The "if" should be a "while" to guarantee you don't proceed until mState >= READY.
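In other words, something along these lines (a sketch; member names come from the quoted patch):

ReentrantMonitorAutoEnter mon(mReentrantMonitor);
while (mState < READY) {
  mReentrantMonitor.Wait();
}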
@@ +283,5 @@
> + return error == OPUS_OK ? NS_OK : NS_ERROR_FAILURE;
> +}
> +
> +nsresult
> +OpusEncoder::EnsureAudioEncoderPrams(int channels, int samplingRate)
"Prams" should presumably be "Params".
@@ +285,5 @@
> +
> +nsresult
> +OpusEncoder::EnsureAudioEncoderPrams(int channels, int samplingRate)
> +{
> + // The track must have at least 1, or 2 channels.
This comment is misleading because you don't actually support 2 channels yet (CHANNELS is hard-coded to 1).
Eventually we'll want to support up to 8 channels using the multistream encoder API.
@@ +303,5 @@
> + mRequiresResample = true;
> +
> + MOZ_ASSERT(samplingRate > 0);
> + mRateMultiplier = kOpusSamplingRate / samplingRate;
> + MOZ_ASSERT(mRateMultiplier > 0);
You're making an assumption here that samplingRate evenly divides 48000, but not verifying it. At the very least you should add:
MOZ_ASSERT(samplingRate * mRateMultiplier == kOpusSamplingRate);
So long as you're already making this assumption, and assuming that the sample rate is at least 8 kHz, you should use the OPUS_SET_MAX_BANDWIDTH() ctl to tell the encoder what the maximum audio bandwidth of the input is (http://www.opus-codec.org/docs/html_api-1.0.2/group__opus__encoderctls.html#ga4f88288e89c595c07c61db316cc45289). Otherwise, the zero-order hold you're using for resampling will sound terrible.
However, if you're going to assume the sampling rate divides 48000 evenly anyway, you should just run the encoder at that sampling rate directly. This will save processing time and give better quality.
I was concerned you might be assuming the sampling rate could change from chunk to chunk of the input, but it looks like that is already broken, as you don't ensure the input chunks end on a frame boundary. If you really want to handle this case, we should be leveraging the Web Audio APIs resampling infrastructure to get good quality resampling (with the associated speed hit).
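For illustration, the max-bandwidth hint could look roughly like this (a sketch; mEncoder/samplingRate are the patch's names, and the mapping uses the standard Opus bandwidth constants):

int bandwidth;
switch (samplingRate) {
  case 8000:  bandwidth = OPUS_BANDWIDTH_NARROWBAND; break;
  case 12000: bandwidth = OPUS_BANDWIDTH_MEDIUMBAND; break;
  case 16000: bandwidth = OPUS_BANDWIDTH_WIDEBAND; break;
  case 24000: bandwidth = OPUS_BANDWIDTH_SUPERWIDEBAND; break;
  default:    bandwidth = OPUS_BANDWIDTH_FULLBAND; break;
}
opus_encoder_ctl(mEncoder, OPUS_SET_MAX_BANDWIDTH(bandwidth));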
@@ +330,5 @@
> +{
> + switch (mEncoderStateInternal) {
> + case ID_HEADER:
> + {
> + OpusIdHeader header(1, 0, kOpusSamplingRate, 0);
You should fill in the pre-skip using the OPUS_GET_LOOKAHEAD ctl instead of hard-coding it to 0 (see http://www.opus-codec.org/docs/html_api-1.0.2/group__opus__encoderctls.html), e.g.,
int lookahead;
int ret;
ret = opus_encoder_ctl(mEncoder, OPUS_GET_LOOKAHEAD(&lookahead));
if (ret != OPUS_OK) {
  lookahead = 0;
}
You should also be padding the input by this many samples at the start of encoding, or the original data will be lost.
See setup_padder() (and the associated read_padder) from opus-tools for an example of a good way to do this: <https://git.xiph.org/?p=opus-tools.git;a=blob;f=src/audio-in.c;hb=HEAD#l868>. I think it's okay to do the padding in a follow-up bug.
@@ +339,5 @@
> + break;
> + }
> + case COMMENT_HEADER:
> + {
> + OpusCommentHeader commentHeader("Mozilla");
The vendor string is meant to contain the version of the actual encoder library (see http://tools.ietf.org/html/draft-ietf-codec-oggopus-00#section-5.2). You should pass in the value from opus_get_version_string().
@@ +340,5 @@
> + }
> + case COMMENT_HEADER:
> + {
> + OpusCommentHeader commentHeader("Mozilla");
> + commentHeader.AddComment("ENCODER=b2g media encoder");
If we're planning to expose this API on desktop, too, the ENCODER tag probably shouldn't be B2G-specific. It might also be a good idea to include MOZ_APP_UA_VERSION (from mozilla-config.h) here.
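Putting those two suggestions together, it could look something like this (a sketch; OpusCommentHeader/AddComment are the patch's own names, and the exact ENCODER string is just an example):

OpusCommentHeader commentHeader(opus_get_version_string());
commentHeader.AddComment("ENCODER=Mozilla " MOZ_APP_UA_VERSION);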
@@ +384,5 @@
> +
> + int frameNeeded = GetFrameSize() - mSourceSegment->GetDuration();
> +
> + // Check to see if we have enough raw data first.
> + if ((mRawSegment->GetDuration() < frameNeeded) && !mInputFinished) {
This should also be a while loop.
@@ +453,5 @@
> + // Appending null data to the pcm buffer if the leftover data is not enough
> + // for opus encoder.
> + if (frameCopied < GetFrameSize() && mInputFinished){
> + for (int i = frameCopied * mChannels; i < GetFrameSize() * mChannels; i++) {
> + pcm.AppendElement(0);
Ideally you would use LPC extension for this (see the same code I suggested for padding the pre-skip above). It's okay to do this in a follow-up bug, but you may want to leave a comment.
@@ +459,5 @@
> + }
> +
> + // TODO: Need to handle all possible rate supported by WebRTC.
> + // Re-sample the source data to 48K sampling rate, but only deals with 8K,
> + // 16K of source for now.
Left by itself this zero-order hold is going to sound terrible. At a minimum you should be telling the encoder what the audio bandwidth is as suggested above.
@@ +477,5 @@
> +#else
> + const float* buf = static_cast<float*>(pcm.Elements());
> + encodedLen = opus_encode_float(mEncoder, buf, kFrameSizeSamples, buffer, len);
> +#endif
> + rawDuration = kFrameSizeSamples;
This value is incorrect on the last frame. You're telling the decoder to play back the 0 samples you added for padding. Instead, you should report the actual duration of the input audio (scaled to a 48 kHz sample rate).
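For example (a sketch using the quoted patch's names; it assumes mRateMultiplier is 48000 divided by the source rate):

rawDuration = mInputFinished ? frameCopied * mRateMultiplier
                             : kFrameSizeSamples;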
(In reply to Cervantes Yu from comment #70)
> To clarify, MediaEncoder is designed as a passive component and to be
> agnostic over threading issues, like ogg or opus libraries. For the methods
> that there might be race conditions we use a monitor to make it thread safe
> (e.g. between MediaStreamListener callback methods and other methods). Other
> than that, we don't specify what the on which thread the MediaEncoder should
> run. The user of the MediaEncoder should handle the threading issue.
>
> Because of the use of a monitor, the call to get encoded data from the
> MediaEncoder might block. The MediaEncoder should not run on a thread that
> forbids blocking, such as the main thread or the IO thread. The typical
> usage is to create a thread to do the heavy lifting and block to wait for
> input data and make the encoder only run on the thread.
Alright, this is very helpful to know. Please document this in the code.
Comment on attachment 741200 [details] [diff] [review]
v3: Media Encoder Framework
Please update the patch for the comments so far and then I'll do another review.
Attachment #741200 -
Flags: review?(roc)
Attachment #741200 -
Flags: feedback?(roc)
Assignee | ||
Comment 74•12 years ago
|
||
Hi :roc, the major change is the creation of MediaEncoder from EncoderTraits; please find my comments in the code review. (Code comments and documentation are not updated yet.)
Hi :derf, and thanks for the review too. It really pointed out my unfamiliarity with Opus; I'll check on those in the next patch.
Attachment #740185 -
Attachment is obsolete: true
Attachment #741200 -
Attachment is obsolete: true
Attachment #740185 -
Flags: feedback?(cku)
Attachment #744448 -
Flags: feedback?(roc)
Assignee | ||
Comment 75•12 years ago
|
||
Comment on attachment 744448 [details] [diff] [review]
v4: Media Encoder Framework, fixes and cleanups per last version.
Review of attachment 744448 [details] [diff] [review]:
-----------------------------------------------------------------
::: content/media/encoder/EncoderTraits.cpp
@@ +32,5 @@
> + "video/ogg",
> + "audio/ogg",
> + "application/ogg",
> + nullptr
> +};
This is a copy from DecoderTraits; I wonder if I should just put the CreateEncoder() function in DecoderTraits.
@@ +66,5 @@
> + if (!isAudio) {
> + // init the videoEncoder
> + }
> +#ifdef MOZ_OPUS
> + audioEncoder = new OpusEncoder();
We'll have a list of "supportedCodecs", both for determining the codec preference and as the return value of |GetOptions()| from MediaRecorder.
::: content/media/encoder/MediaEncoder.h
@@ +97,5 @@
> + * | no data (error) | v
> + * |----------<----------------------|----->------> ENCODE_DONE --> return
> + * !eof eof
> + */
> + void GetEncodedData(unsigned char* buffer, int length, int &encodedLen);
TODO: Use nsTArray<unsigned char>*
Assignee | ||
Comment 76•12 years ago
|
||
Please merge the comment/documentation patch into the other patch --- makes it easier to review.
(In reply to Shelly Lin [:shelly] from comment #75)
> ::: content/media/encoder/EncoderTraits.cpp
> @@ +32,5 @@
> > + "video/ogg",
> > + "audio/ogg",
> > + "application/ogg",
> > + nullptr
> > +};
>
> This is a copy from DecoderTraits, I wonder I should just put the function
> CreateEncoder() in the DecoderTraits.
I don't think so. Leave it duplicated for now.
Assignee | ||
Comment 78•12 years ago
|
||
More fixes and cleanups, comments and documentation are updated too.
TODO for me:
- Use nsAString instead of nsACString for parsing MIMEType.
- Create interface to return list of supported codecs.
Attachment #744448 -
Attachment is obsolete: true
Attachment #744577 -
Attachment is obsolete: true
Attachment #744448 -
Flags: feedback?(roc)
Attachment #745037 -
Flags: feedback?(roc)
Assignee | ||
Comment 79•12 years ago
|
||
(In reply to Timothy B. Terriberry (:derf) from comment #71)
> Comment on attachment 741200 [details] [diff] [review]
> v3: Media Encoder Framework
>
> Review of attachment 741200 [details] [diff] [review]:
> -----------------------------------------------------------------
>
> @@ +303,5 @@
> > + mRequiresResample = true;
> > +
> > + MOZ_ASSERT(samplingRate > 0);
> > + mRateMultiplier = kOpusSamplingRate / samplingRate;
> > + MOZ_ASSERT(mRateMultiplier > 0);
>
> You're making an assumption here that samplingRate evenly divides 48000, but
> not verifying it. At the very least you should add:
>
> MOZ_ASSERT(samplingRate * mRateMultiplier == kOpusSamplingRate);
>
Thanks for pointing this out.
> So long as you're already making this assumption, and assuming that the
> sample rate is at least 8 kHz, you should use the OPUS_SET_MAX_BANDWIDTH()
> ctl to tell the encoder what the maximum audio bandwidth of the input is
> (http://www.opus-codec.org/docs/html_api-1.0.2/group__opus__encoderctls.
> html#ga4f88288e89c595c07c61db316cc45289). Otherwise, the zero-order hold
> you're using for resampling will sound terrible.
>
I'm not sure I followed. Currently the audio source of the encoder comes from getUserMedia in WebRTC, and WebRTC supports "8K, 16K, 32K" output sample rates *according to the documentation*. In fact, it is hardcoded and supports only 16K now. How can I get the bandwidth from audio segments? Or should I just set a constant max bandwidth for the encoder, e.g. OPUS_SET_MAX_BANDWIDTH(OPUS_BANDWIDTH_SUPERWIDEBAND)?
> However, if you're going to assume the sampling rate divides 48000 evenly
> anyway, you should just run the encoder at that sampling rate directly. This
> will save processing time and give better quality.
>
Agreed on running the encoder at the source sampling rate directly; however, playback doesn't seem to support sample rates other than 48K?
> @@ +459,5 @@
> > + }
> > +
> > + // TODO: Need to handle all possible rate supported by WebRTC.
> > + // Re-sample the source data to 48K sampling rate, but only deals with 8K,
> > + // 16K of source for now.
>
> Left by itself this zero-order hold is going to sound terrible. At a minimum
> you should be telling the encoder what the audio bandwidth is as suggested
> above.
>
It is intended to be a temporary solution.
Assignee | ||
Updated•12 years ago
|
Flags: needinfo?(tterribe)
Assignee | ||
Comment 80•12 years ago
|
||
Created bug 868962 for the implementation of the Opus encoder.
Flags: needinfo?(tterribe)
Comment 81•12 years ago
|
||
(In reply to Shelly Lin [:shelly] from comment #79)
> I'm not sure I followed. Currently the audio source of Encoder comes from
> the getUserMedia of WebRTC, and the WebRTC supports "8K, 16K, 32K" sample
> rate of output *documently*. In fact, it is hardcoded and supports only 16K
> now. How can I get the bandwidth from audio segments? Or should I just set a
> constant value of max bandwidth for the encoder? e.g.
> OPUS_SET_MAX_BANDWIDTH(OPUS_BANDWIDTH_SUPERWIDEBAND).
OPUS_BANDWIDTH_SUPERWIDEBAND corresponds to a 24 kHz sampling rate, not 32 kHz (Opus only supports sampling rates that evenly divide 48 kHz). If we want to support 32 kHz input, we should resample to 48 kHz and use OPUS_BANDWIDTH_FULLBAND.
> Agree on just run the encoder at a sampling rate directly, however, the
> playback doesn't seem to support sample rate other then 48K?
After looking at the file you gave me on IRC, this is because the timestamps were invalid. They were incrementing at 16 kHz, but timestamps should always run at 48 kHz, regardless of the sampling rate the encoder API runs at. See <http://tools.ietf.org/html/draft-ietf-codec-oggopus-00#section-4>, second paragraph.
The reason playback didn't work is because the stream was rejected (correctly!) by the check described in <http://tools.ietf.org/html/draft-ietf-codec-oggopus-00#section-4.4>, second paragraph.
> > Left by itself this zero-order hold is going to sound terrible. At a minimum
> > you should be telling the encoder what the audio bandwidth is as suggested
> > above.
> >
> It is intended to be a temporary solution.
Yes, I'm just saying we can fix most of this problem just by correctly telling the encoder what the maximum audio bandwidth is, so we should do that.
I think the rest of your questions were answered on IRC. Let me know if you have any more.
No longer blocks: 868962
Assignee | ||
Comment 82•12 years ago
|
||
Fixed some crashes and changed the buffer type to "uint8_t"; all the Opus-related implementation will be discussed in bug 868962.
Attachment #745037 -
Attachment is obsolete: true
Attachment #745037 -
Flags: feedback?(roc)
Attachment #745828 -
Flags: feedback?(roc)
Assignee | ||
Comment 83•12 years ago
|
||
1. Changed the argument type of aMIMEType from nsACString to nsAString.
2. Added a function to get the list of supported codecs.
Attachment #745828 -
Attachment is obsolete: true
Attachment #745828 -
Flags: feedback?(roc)
Attachment #746204 -
Flags: feedback?(roc)
Assignee | ||
Comment 84•12 years ago
|
||
- Fixed OpusEncoder so it encodes data at the source sample rate.
- Added GetSupportedMIMETypes in EncoderTraits.
Attachment #746204 -
Attachment is obsolete: true
Attachment #746204 -
Flags: feedback?(roc)
Attachment #746843 -
Flags: feedback?(roc)
Assignee | ||
Comment 85•11 years ago
|
||
Latest updates and fixes from bug 868962.
Attachment #746843 -
Attachment is obsolete: true
Attachment #746843 -
Flags: feedback?(roc)
Attachment #747807 -
Flags: feedback?(roc)
Comment on attachment 747807 [details] [diff] [review]
v5: Media Encoder Framework
Review of attachment 747807 [details] [diff] [review]:
-----------------------------------------------------------------
The changes to the Segment classes should be split out into a separate patch.
::: content/media/encoder/MediaEncoder.h
@@ +53,5 @@
> + * 4) To stop encoding, remove this component from its source stream.
> + * > sourceStream->RemoveListener(encoder);
> + *
> + * 5) To pause encoding, call |encoder->Pause()|, note that this function does
> + * not stop MediaStreamGraph from calling callbacks of MediaStreamListener.
Please explain what pausing means in this case. Is stream data produced while paused just dropped from the encoding?
::: content/media/encoder/MediaWriter.cpp
@@ +5,5 @@
> +#include "MediaWriter.h"
> +
> +namespace mozilla {
> +
> +}
You don't seem to need this file.
::: content/media/encoder/MediaWriter.h
@@ +12,5 @@
> +/*
> + * MediaWriter writes out media data, with a given container type. It
> + * encapsulates encoded data into packets, and returns the result to its caller.
> + */
> +class MediaWriter {
Maybe we should call this ContainerWriter?
It's still not really clear how this class works, i.e., how someone should use it and exactly what it does.
If this is a class that wraps raw track data into a container, then I would expect to see a method that writes raw data for a track. Maybe WriteEncodedData is that method, but it doesn't say anything about tracks. I would also expect to see a way to return the final container data; maybe GetPacket is that method, but the name doesn't sound like it.
@@ +29,5 @@
> +
> + /*
> + * If we need to force a flush.
> + */
> + virtual void Flush(bool aRequire) = 0;
Please document aRequire
Assignee | ||
Comment 87•11 years ago
|
||
Adapt the MediaSegment and AudioSegment for use by MediaEncoder.
Attachment #747807 -
Attachment is obsolete: true
Attachment #747807 -
Flags: feedback?(roc)
Attachment #749706 -
Flags: feedback?(roc)
Assignee | ||
Comment 88•11 years ago
|
||
Major changes in this patch:
- Rename MediaWriter to ContainerWriter, and drop the MediaWriter.cpp.
- In ContainerWriter
- The name of method WriteEncodedData remains unchanged, since it not only writes the encoded track data but also the header. I’ve attached more details about that in the comments.
- Rename GetPacket to GetContainerData. This is the method which populates the final container data into a buffer.
- Drop the argument of length of aBuffer in WriteEncodedData, since aBuffer->Length() is available.
- Drop the argument of capacity of aOutput in GetContainerData, since aOutput->Capacity() is available.
- In MediaEncoder
- Drop the argument of capacity of aOutput in GetEncodedData, since aOutput->Capacity() is available.
- Rename some internal states.
- In TrackEncoder
- Rename GetEncodedData to GetEncodedTrack.
- Drop the argument of capacity of aOutput in GetEncodedData, since aOutput->Capacity() is available.
Attachment #749712 -
Flags: feedback?(roc)
Attachment #749706 -
Flags: review+
Attachment #749706 -
Flags: feedback?(roc)
Attachment #749706 -
Flags: feedback+
Comment on attachment 749712 [details] [diff] [review]
v6: Media Encoder Framework
Review of attachment 749712 [details] [diff] [review]:
-----------------------------------------------------------------
I think we could speed this up by breaking up this patch a bit more:
-- A patch to add ContainerWriter.h
-- A patch to implement OggWriter.h/.cpp
-- A patch to add TrackEncoder and AudioTrackEncoder. We're not using VideoTrackEncoder yet so just take it out for now, we'll put it back later.
-- A patch to implement OpusEncoder.
-- A patch to implement MediaEncoder.
-- A patch to implement EncoderTraits.
Does that make sense?
::: content/media/encoder/ContainerWriter.h
@@ +21,5 @@
> +
> + /*
> + * Writes the encoded track data or header from aBuffer into a container.
> + * aDuration indicates the duration of this chunk of track, and aEos is
> + * true if this is the last track.
Do you mean "the last packet of the track" instead of "the last track"?
Does this interface support multiple tracks? If so, how?
What are the units of aDuration?
Does this method modify aBuffer? If not, why are we passing a pointer instead of a const reference?
@@ +30,5 @@
> + /*
> + * If aRequire is true, informs the writer to force a flush later in call of
> + * GetContainerData().
> + */
> + virtual void Flush(bool aRequire) = 0;
I still don't understand what aRequire means, or what Flush() does.
@@ +36,5 @@
> + /*
> + * Populates the final container data to aOutput buffer with bytes length
> + * aOutputLen.
> + */
> + virtual void GetContainerData(nsTArray<uint8_t>* aOutput, int &aOutputLen) = 0;
Seems like we don't need aOutputLen, since the caller can check aOutput->Length() instead?
::: content/media/encoder/MediaEncoder.h
@@ +76,5 @@
> + , mVideoEncoder(nullptr)
> + , mWriter(nullptr)
> + , mState(MediaEncoderState::INIT)
> + , mEncoderStateInternal(EncoderStateInternal::IDLE)
> + , mEncoderThread(nullptr)
No need to initialize nsCOMPtrs (and nsRefPtrs and nsAutoPtrs) to null explicitly.
@@ +102,5 @@
> + * Sets up writer, audio encoder, and video encoder. MediaEncoder changes its
> + * state to PREPARING here and sets the internal state to ENCODE_HEADER.
> + */
> + nsresult Init(ContainerWriter* aWriter, AudioTrackEncoder* aAudioEnc,
> + VideoTrackEncoder* aVideoEnc);
This lets us encode one audio track and one video track. I think it's OK to limit the number of tracks like this, for now, but please say in the comment that aAudioEnc is used to encode the first audio track only.
@@ +158,5 @@
> + * | no data (error) | v
> + * |----------<-----------------------|----->------> ENCODE_DONE --> return
> + * !eof eof
> + */
> + void GetEncodedData(nsTArray<uint8_t>* aOutput, int &aOutputLen);
I think the caller can use aOutput->Length() instead of reading aOutputLen, so we can get rid of the aOutputLen parameter.
@@ +192,5 @@
> + }
> +
> + AudioTrackEncoder* mAudioEncoder;
> + VideoTrackEncoder* mVideoEncoder;
> + ContainerWriter* mWriter;
Use nsAutoPtr for mAudioEncoder/mVideoEncoder/mWriter.
@@ +201,5 @@
> + ENCODE_TRACK,
> + GET_CONTAINER_DATA,
> + ENCODE_DONE,
> + } mEncoderStateInternal;
> + nsCOMPtr<nsIThread> mEncoderThread;
It looks like mEncoderThread is debug-only. If so, I think we can use DebugOnly<nsCOMPtr<nsIThread> > mEncoderThread.
@@ +225,5 @@
> + /*
> + * Creates and sets up header for a specific codec. Result data is returned
> + * in buffer aOutput with length aOutputLength.
> + */
> + virtual nsresult GetHeader(nsTArray<uint8_t>* aOutput, int &aOutputLen) = 0;
Again, can we remove aOutputLen?
When can this be called? Must this be called before GetEncodedTrack? Why do we need a separate method to get the header data, instead of having GetEncodedTrack return the header data first?
@@ +244,5 @@
> +
> + /*
> + * Forces the media writer to perform a flush if true.
> + */
> + virtual bool RequiresFlush() = 0;
Please be more specific about what this does.
@@ +279,5 @@
> +
> + /*
> + * Create audio encoder.
> + */
> + virtual nsresult CreateAudioEncoder(int aChannels, int aSamplingRate) = 0;
When should this method be called? Should it be called Init?
@@ +284,5 @@
> +
> + /*
> + * Returns the frame size in numbers of samples.
> + */
> + virtual int GetFrameSize() = 0;
Can you explain more about what this means?
@@ +305,5 @@
> +
> + /*
> + * A ReentrantMonitor to protect the pushing/pulling of mRawSegment.
> + */
> + ReentrantMonitor mReentrantMonitor;
Put it above mRawSegment.
Do we really need to share mRawSegment across threads? Maybe it would be simpler/safer to have the MediaStreamListener NotifyQueuedTrackChanges dispatch a Runnable to the encoder's thread with a reference to the data to encode?
Assignee | ||
Comment 90•11 years ago
|
||
(In reply to Robert O'Callahan (:roc) (Mozilla Corporation) from comment #89)
> Comment on attachment 749712 [details] [diff] [review]
> v6: Media Encoder Framework
>
> Review of attachment 749712 [details] [diff] [review]:
> -----------------------------------------------------------------
>
> ::: content/media/encoder/ContainerWriter.h
> @@ +21,5 @@
> > +
> > + /*
> > + * Writes the encoded track data or header from aBuffer into a container.
> > + * aDuration indicates the duration of this chunk of track, and aEos is
> > + * true if this is the last track.
>
> Do you mean "the last packet of the track" instead of "the last track"?
Yes.
>
> Does this interface support multiple tracks? If so, how?
So far no.
>
> What are the units of aDuration?
Number of samples.
>
> Does this method modify aBuffer? If not, why are we passing a pointer
> instead of a const reference?
No; we were using char* originally, so I will change it to a const ref later.
>
> @@ +30,5 @@
> > + /*
> > + * If aRequire is true, informs the writer to force a flush later in call of
> > + * GetContainerData().
> > + */
> > + virtual void Flush(bool aRequire) = 0;
>
> I still don't understand what aRequire means, or what Flush() does.
"pageout" from opus doesn't really generate a page, it only populates a page when it has enough packets, but sometimes we need to force generating a page even there isn't enough packets, such as writing headers. Opus provides a "flush" call to force producing a page.
>
> @@ +36,5 @@
> > + /*
> > + * Populates the final container data to aOutput buffer with bytes length
> > + * aOutputLen.
> > + */
> > + virtual void GetContainerData(nsTArray<uint8_t>* aOutput, int &aOutputLen) = 0;
>
> Seems like we don't need aOutputLen, since the caller can check
> aOutput->Length() instead?
Yes, but then we need some other way to report errors, since opus uses "result length < 0" to indicate errors.
> When can this be called?
No, shall move it to private.
> Must this be called before GetEncodedTrack? Why do
> we need a separate method to get the header data, instead of having
> GetEncodedTrack return the header data first?
Yes, GetHeader must be called before GetEncodedTrack. It was originally designed as a separate step, in case some other container has a different sequence for writing headers/tracks.
>
>
> @@ +279,5 @@
> > +
> > + /*
> > + * Create audio encoder.
> > + */
> > + virtual nsresult CreateAudioEncoder(int aChannels, int aSamplingRate) = 0;
>
> When should this method be called? Should it be called Init?
It is called after getting the first valid track data from MediaStreamGraph, since we need to know the number of channels and the sampling rate of the source.
> @@ +305,5 @@
> > +
> > + /*
> > + * A ReentrantMonitor to protect the pushing/pulling of mRawSegment.
> > + */
> > + ReentrantMonitor mReentrantMonitor;
>
> Put it above mRawSegment.
>
> Do we really need to share mRawSegment across threads? Maybe it would be
> simpler/safer to have the MediaStreamListener NotifyQueuedTrackChanges
> dispatch a Runnable to the encoder's thread with a reference to the data to
> encode?
Sure, not a problem, but I just want to make sure that we still need to wait in the call to GetEncodedTrack if there isn't enough raw data, right?
(In reply to Shelly Lin [:shelly] from comment #90)
> (In reply to Robert O'Callahan (:roc) (Mozilla Corporation) from comment #89)
> > Does this interface support multiple tracks? If so, how?
> So far no.
OK, that should be documented.
> > What are the units of aDuration?
> Sample numbers.
OK, please document that.
> > Does this method modify aBuffer? If not, why are we passing a pointer
> > instead of a const reference?
> No, because we were using char* originally, so will change it to a const ref
> later.
OK.
> > @@ +30,5 @@
> > > + /*
> > > + * If aRequire is true, informs the writer to force a flush later in call of
> > > + * GetContainerData().
> > > + */
> > > + virtual void Flush(bool aRequire) = 0;
> >
> > I still don't understand what aRequire means, or what Flush() does.
> "pageout" from opus doesn't really generate a page, it only populates a page
> when it has enough packets, but sometimes we need to force generating a page
> even there isn't enough packets, such as writing headers. Opus provides a
> "flush" call to force producing a page.
OK. But why do we need two kinds of flushes()? When do we actually need to flush, except when we stop recording?
> > @@ +36,5 @@
> > > + /*
> > > + * Populates the final container data to aOutput buffer with bytes length
> > > + * aOutputLen.
> > > + */
> > > + virtual void GetContainerData(nsTArray<uint8_t>* aOutput, int &aOutputLen) = 0;
> >
> > Seems like we don't need aOutputLen, since the caller can check
> > aOutput->Length() instead?
> Yes, but then we need to use someother way to report errors since opus use
> "result length < 0" to indicate errors.
Return an nsresult then.
> > Must this be called before GetEncodedTrack? Why do
> > we need a separate method to get the header data, instead of having
> > GetEncodedTrack return the header data first?
> Yes, GetHeader must be called before GetEncodedTrack. It is originally
> designed to be a separate step, in case that there will be some other
> containers with different sequence of writing header/tracks.
Hmm, OK.
> > @@ +279,5 @@
> > > +
> > > + /*
> > > + * Create audio encoder.
> > > + */
> > > + virtual nsresult CreateAudioEncoder(int aChannels, int aSamplingRate) = 0;
> >
> > When should this method be called? Should it be called Init?
> It is called after getting the first valid track data from MediaStreamGraph,
> since we need to know the channel numbers and sampling ratre of source.
How about calling it SetAudioParameters instead.
> > @@ +305,5 @@
> > > +
> > > + /*
> > > + * A ReentrantMonitor to protect the pushing/pulling of mRawSegment.
> > > + */
> > > + ReentrantMonitor mReentrantMonitor;
> >
> > Put it above mRawSegment.
> >
> > Do we really need to share mRawSegment across threads? Maybe it would be
> > simpler/safer to have the MediaStreamListener NotifyQueuedTrackChanges
> > dispatch a Runnable to the encoder's thread with a reference to the data to
> > encode?
> Sure, not a problem, but just want to make sure that we still need to wait
> in call to GetEncodedTrack if there isn't enough raw data, right?
Hmm, you're right, that's a problem. I guess your current approach is better since it means we don't have to run the event loop in the encoder's thread. OK.
Assignee | ||
Comment 92•11 years ago
|
||
(In reply to Robert O'Callahan (:roc) (Mozilla Corporation) from comment #91)
> (In reply to Shelly Lin [:shelly] from comment #90)
> > (In reply to Robert O'Callahan (:roc) (Mozilla Corporation) from comment #89)
> > > @@ +30,5 @@
> > > > + /*
> > > > + * If aRequire is true, informs the writer to force a flush later in call of
> > > > + * GetContainerData().
> > > > + */
> > > > + virtual void Flush(bool aRequire) = 0;
> > >
> > > I still don't understand what aRequire means, or what Flush() does.
> > "pageout" from opus doesn't really generate a page, it only populates a page
> > when it has enough packets, but sometimes we need to force generating a page
> > even there isn't enough packets, such as writing headers. Opus provides a
> > "flush" call to force producing a page.
>
> OK. But why do we need two kinds of flushes()? When do we actually need to
> flush, except when we stop recording?
Do you mean the Flush() in ContainerWriter (virtual void Flush(bool aRequire) = 0;) and the RequiresFlush() in AudioTrackEncoder (virtual bool RequiresFlush() = 0;)?
ContainerWriter does not control when to flush; it only flushes when told to, and AudioTrackEncoder tells ContainerWriter whether it needs a flush via
| writer->Flush( audio_encoder->RequiresFlush() ) |
Later on, the ContainerWriter actually flushes out a page in the call to GetContainerData().
Hmm, I think this is really confusing; maybe we should name it ContainerWriter::SetFlush() instead of ContainerWriter::Flush(), or pass an aNeedFlush parameter to GetContainerData(), but that sort of adds constraints on the arguments.
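The second alternative would look roughly like this (the signature is just an assumption for illustration):

virtual nsresult GetContainerData(nsTArray<uint8_t>* aOutput, bool aNeedFlush) = 0;
// caller:
writer->GetContainerData(&output, audioEncoder->RequiresFlush());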
Assignee | ||
Comment 93•11 years ago
|
||
According to Opus, we need to flush after writing the Id Header, Comment Header, and the last packet.
Assignee | ||
Comment 94•11 years ago
|
||
Attachment #749712 -
Attachment is obsolete: true
Attachment #749712 -
Flags: feedback?(roc)
Attachment #750979 -
Flags: feedback?(roc)
Assignee | ||
Comment 95•11 years ago
|
||
Attachment #750981 -
Flags: feedback?(roc)
Assignee | ||
Updated•11 years ago
|
Attachment #750979 -
Attachment description: Part1: Add ContainerWriter.h/.cpp → Part1: Add ContainerWriter.h
Assignee | ||
Comment 96•11 years ago
|
||
Assignee | ||
Comment 97•11 years ago
|
||
Assignee | ||
Comment 98•11 years ago
|
||
Assignee | ||
Comment 99•11 years ago
|
||
Assignee | ||
Comment 100•11 years ago
|
||
(In reply to Robert O'Callahan (:roc) (Mozilla Corporation) from comment #89)
> Comment on attachment 749712 [details] [diff] [review]
> v6: Media Encoder Framework
>
> @@ +201,5 @@
> > + ENCODE_TRACK,
> > + GET_CONTAINER_DATA,
> > + ENCODE_DONE,
> > + } mEncoderStateInternal;
> > + nsCOMPtr<nsIThread> mEncoderThread;
>
> It looks like mEncoderThread is debug-only. If so, I think we can use
> DebugOnly<nsCOMPtr<nsIThread> > mEncoderThread.
>
I got a compile error when using !mEncoderThread this way; I guess it's because the type coercion of DebugOnly is not well defined. It works if I use DebugOnly<nsIThread*>, but then I need to initialize its value in the constructor and clear it in the destructor, making this debug-only member show up everywhere in the code. So I'd like to remove this check.
> @@ +279,5 @@
> > +
> > + /*
> > + * Create audio encoder.
> > + */
> > + virtual nsresult CreateAudioEncoder(int aChannels, int aSamplingRate) = 0;
>
> When should this method be called? Should it be called Init?
>
On second thought, I think Init is better than SetAudioParameters because it does initialize the encoder, and OggWriter uses Init as well.
Assignee | ||
Comment 101•11 years ago
|
||
Removed Flush; instead, GetContainerData takes a parameter indicating whether it needs a flush.
Attachment #750979 -
Attachment is obsolete: true
Attachment #750979 -
Flags: feedback?(roc)
Attachment #752590 -
Flags: feedback?(roc)
Assignee | ||
Comment 102•11 years ago
|
||
Removed the Flush, and updated documentation.
Attachment #750981 -
Attachment is obsolete: true
Attachment #750981 -
Flags: feedback?(roc)
Attachment #752592 -
Flags: feedback?(roc)
Assignee | ||
Comment 103•11 years ago
|
||
Separate TrackEncoder and AudioTrackEncoder out of MediaEncoder.h/.cpp into their own files. Remove the previous method, RequiresFlush.
Attachment #750982 -
Attachment is obsolete: true
Attachment #752594 -
Flags: feedback?(roc)
Assignee | ||
Comment 104•11 years ago
|
||
Revised per comment 10 of bug 868962.
Attachment #750983 -
Attachment is obsolete: true
Attachment #752653 -
Flags: feedback?(roc)
Assignee | ||
Comment 105•11 years ago
|
||
Updated the documentation.
Attachment #750984 -
Attachment is obsolete: true
Attachment #752659 -
Flags: feedback?(roc)
Comment on attachment 752590 [details] [diff] [review]
Part1: Add ContainerWriter.h
Review of attachment 752590 [details] [diff] [review]:
-----------------------------------------------------------------
::: content/media/encoder/ContainerWriter.h
@@ +25,5 @@
> + * aEos is true if this is the last packet of track.
> + * Currently, WriteEncodedData doesn't support multiple tracks.
> + */
> + virtual nsresult WriteEncodedData(const nsTArray<uint8_t>& aBuffer,
> + int aDuration, bool aEos) = 0;
Boolean parameters aren't very good. It's hard to read code like "WriteEncodedData(buffer, duration, true)". Better to use a flags word so we can write "WriteEncodedData(buffer, duration, ContainerWriter::END_OF_STREAM)".
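For illustration, something along these lines (a sketch; the enum value is an assumption based on this comment):

enum {
  END_OF_STREAM = 1 << 0
};
virtual nsresult WriteEncodedData(const nsTArray<uint8_t>& aBuffer,
                                  int aDuration, uint32_t aFlags = 0) = 0;
// call site:
writer->WriteEncodedData(buffer, duration, ContainerWriter::END_OF_STREAM);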
@@ +28,5 @@
> + virtual nsresult WriteEncodedData(const nsTArray<uint8_t>& aBuffer,
> + int aDuration, bool aEos) = 0;
> +
> + /*
> + * Populates the final container data to aOutput buffer if it has accumulated
Copies the final container data to aOutput if it has accumulated
@@ +30,5 @@
> +
> + /*
> + * Populates the final container data to aOutput buffer if it has accumulated
> + * enough packets from WriteEncodedData. However, aNeedFlush is true will
> + * force generate a page.
Need a flags word here too. And we need to say more about what it means to flush or "generate a page".
@@ +36,5 @@
> + virtual nsresult GetContainerData(nsTArray<uint8_t>* aOutput,
> + bool aNeedFlush) = 0;
> +
> +protected:
> + virtual void ProcessEndOfStream() = 0;
Need to document this so that subclasses know how to override it.
@@ +38,5 @@
> +
> +protected:
> + virtual void ProcessEndOfStream() = 0;
> +
> + nsTArray<char> mPacket;
nsTArray<uint8_t>
Assignee | ||
Comment 107•11 years ago
|
||
Update comments of GetEncodedData.
Attachment #752659 -
Attachment is obsolete: true
Attachment #752659 -
Flags: feedback?(roc)
Attachment #753151 -
Flags: feedback?(roc)
Assignee | ||
Comment 108•11 years ago
|
||
This part is mostly unchanged.
Attachment #750986 -
Attachment is obsolete: true
Attachment #753154 -
Flags: feedback?(roc)
Comment on attachment 752590 [details] [diff] [review]
Part1: Add ContainerWriter.h
Review of attachment 752590 [details] [diff] [review]:
-----------------------------------------------------------------
::: content/media/encoder/ContainerWriter.h
@@ +8,5 @@
> +
> +#include "nsTArray.h"
> +
> +namespace mozilla {
> +/*
Use /**
@@ +18,5 @@
> + : mInitialized(false)
> + {}
> + virtual ~ContainerWriter() {}
> +
> + /*
Use /**
Comment on attachment 752592 [details] [diff] [review]
Part2: Implement OggWriter.h/.cpp
Review of attachment 752592 [details] [diff] [review]:
-----------------------------------------------------------------
::: content/media/ogg/OggWriter.cpp
@@ +67,5 @@
> +
> + if (mPacket.b_o_s) {
> + mPacket.b_o_s = 0;
> + }
> + mPacket.packetno++;
Either make mPacket a local variable, or set mPacket.packet to null so that there's no dangling pointer.
@@ +78,5 @@
> +{
> + // Set eos flag to true, and once the eos is written to a packet, there must
> + // not be anymore pages after a page has marked as eos.
> + LOG("[Ogg] Set e_o_s flag to true.");
> + mPacket.e_o_s = 1;
Just put this in WriteEncodedData
@@ +84,5 @@
> +
> +nsresult
> +OggWriter::GetContainerData(nsTArray<uint8_t>* aOutput, bool aNeedFlush)
> +{
> + // TODO what if capacity of aOutput is smaller than page header + page body?
aOutput can be made as big as you want. Nothing to worry about.
@@ +106,5 @@
> + MOZ_ASSERT(aOutput->Capacity() >= mOggPage.header_len + mOggPage.body_len);
> + memcpy(aOutput->Elements(), mOggPage.header, mOggPage.header_len);
> + memcpy(aOutput->Elements() + mOggPage.header_len, mOggPage.body,
> + mOggPage.body_len);
> + aOutput->SetLength(mOggPage.header_len + mOggPage.body_len);
SetLength first, then copy the data in. No need to check the capacity. Also no need to SetLength(0) above.
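I.e. roughly this (a sketch; member names come from the quoted patch):

aOutput->SetLength(mOggPage.header_len + mOggPage.body_len);
memcpy(aOutput->Elements(), mOggPage.header, mOggPage.header_len);
memcpy(aOutput->Elements() + mOggPage.header_len,
       mOggPage.body, mOggPage.body_len);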
::: content/media/ogg/OggWriter.h
@@ +10,5 @@
> +#include <ogg/ogg.h>
> +
> +namespace mozilla {
> +/*
> + * WriteEncodedData inserts raw packets into Ogg stream(ogg_stream_state), and
Space before (
@@ +21,5 @@
> +{
> +public:
> + OggWriter();
> +
> + ~OggWriter() {}
Don't need this.
Comment on attachment 752594 [details] [diff] [review]
Part3: Add TrackEncoder and AudioEncoder
Review of attachment 752594 [details] [diff] [review]:
-----------------------------------------------------------------
::: content/media/encoder/TrackEncoder.h
@@ +11,5 @@
> +#include "AudioSegment.h"
> +#include "StreamBuffer.h"
> +
> +namespace mozilla {
> +/*
/** comment (for all the methods below too)
@@ +12,5 @@
> +#include "StreamBuffer.h"
> +
> +namespace mozilla {
> +/*
> + * Base class of AudioTrackEncoder and VideoTrackEncoder.
"Lifetimes managed by MediaEncoder. Most methods can only be called on the MediaEncoder's thread, but some subclass methods can be called on other threads when noted."
@@ +51,5 @@
> + /*
> + * Returns its current state of TrackEncoder.
> + */
> + TrackEncoderState GetCurrentState() {
> + return mState;
Let's move states down to the subclass. Instead of this method we can have
bool IsInitialized()
and
bool IsEncodingComplete()
@@ +70,5 @@
> + , mRawSegment(new AudioSegment())
> + , mInputFinished(false)
> + {}
> +
> + virtual ~AudioTrackEncoder() {}
Not needed.
@@ +73,5 @@
> +
> + virtual ~AudioTrackEncoder() {}
> +
> + /*
> + * Appends and consumes a segment coming from MediaStreamGraph.
"Called on the MediaStreamGraph thread"
@@ +85,5 @@
> +
> + /*
> + * Returns the packet duration in numbers of samples.
> + */
> + virtual int GetPacketDuration() = 0;
Make this protected.
Say "mReentrantMonitor will be notified when at least this much data has been added to mRawSegment."
@@ +104,5 @@
> + */
> + ReentrantMonitor mReentrantMonitor;
> +
> + /*
> + * A segment queue of audio track.
Add "protected by mReentrantMonitor"
Comment on attachment 752590 [details] [diff] [review]
Part1: Add ContainerWriter.h
Review of attachment 752590 [details] [diff] [review]:
-----------------------------------------------------------------
::: content/media/encoder/ContainerWriter.h
@@ +25,5 @@
> + * aEos is true if this is the last packet of track.
> + * Currently, WriteEncodedData doesn't support multiple tracks.
> + */
> + virtual nsresult WriteEncodedData(const nsTArray<uint8_t>& aBuffer,
> + int aDuration, bool aEos) = 0;
WriteEncodedData should return a void* allocated by moz_malloc.
@@ +33,5 @@
> + * enough packets from WriteEncodedData. However, aNeedFlush is true will
> + * force generate a page.
> + */
> + virtual nsresult GetContainerData(nsTArray<uint8_t>* aOutput,
> + bool aNeedFlush) = 0;
nsTArray<nsAutoArrayPtr<uint8_t> >* aOutput
Assignee | ||
Comment 113•11 years ago
|
||
(In reply to Robert O'Callahan (:roc) (Mozilla Corporation) from comment #112)
> Comment on attachment 752590 [details] [diff] [review]
> Part1: Add ContainerWriter.h
>
> Review of attachment 752590 [details] [diff] [review]:
> -----------------------------------------------------------------
>
> ::: content/media/encoder/ContainerWriter.h
> @@ +25,5 @@
> > + * aEos is true if this is the last packet of track.
> > + * Currently, WriteEncodedData doesn't support multiple tracks.
> > + */
> > + virtual nsresult WriteEncodedData(const nsTArray<uint8_t>& aBuffer,
> > + int aDuration, bool aEos) = 0;
>
> WriteEncodedData should return a void* allocated by moz_malloc.
>
> @@ +33,5 @@
> > + * enough packets from WriteEncodedData. However, aNeedFlush is true will
> > + * force generate a page.
> > + */
> > + virtual nsresult GetContainerData(nsTArray<uint8_t>* aOutput,
> > + bool aNeedFlush) = 0;
>
> nsTArray<nsAutoArrayPtr<uint8_t> >* aOutput
I assume the point of passing a pointer to an array of array pointers is that maybe we'll have multiple tracks in the future? But it still returns one page on every call. So right now, in the single-track case, aOutput always contains one array pointer, which points to a uint8_t array.
I think what Randy was asking is "who manages those array pointers? In other words, who keeps them and decides when to release them?" My answer would be whoever actually uses the MediaEncoder, and that is MediaRecorder in our case.
Randy also suggests that the MediaEncoder could return a void* instead of a pointer to an array of pointers, making it easier to understand now while still leaving room to change later.
Comment 114•11 years ago
|
||
I'm worried about reads and writes happening at the same time. May I know how the video code deals with the frame pointer? With a monitor?
Assignee | ||
Comment 115•11 years ago
|
||
Comment on attachment 752653 [details] [diff] [review]
Part4: Add OpusEncoder.h/.cpp
Review of attachment 752653 [details] [diff] [review]:
-----------------------------------------------------------------
::: content/media/encoder/OpusEncoder.cpp
@@ +465,5 @@
> + aOutput->Elements(), aOutput->Capacity());
> +#else
> + const float* pcmBuf = static_cast<float*>(pcm.Elements());
> + result = opus_encode_float(mEncoder, pcmBuf, GetPacketDuration(),
> + aOutput->Elements(), aOutput->Capacity());
SetLength to 4K first, and set the length back to the actual result length.
Comment on attachment 752594 [details] [diff] [review]
Part3: Add TrackEncoder and AudioEncoder
Review of attachment 752594 [details] [diff] [review]:
-----------------------------------------------------------------
::: content/media/encoder/TrackEncoder.h
@@ +12,5 @@
> +#include "StreamBuffer.h"
> +
> +namespace mozilla {
> +/*
> + * Base class of AudioTrackEncoder and VideoTrackEncoder.
Add a comment here explaining how the data flow works. Something like:
AppendAudioSegment/AppendVideoSegment is called on subclasses of this class from the MediaStreamGraph thread, the media data is stored in the TrackEncoder, and then later GetEncodedTrack is called to encode and retrieve the encoded data.
Comment on attachment 752653 [details] [diff] [review]
Part4: Add OpusEncoder.h/.cpp
Review of attachment 752653 [details] [diff] [review]:
-----------------------------------------------------------------
::: content/media/encoder/OpusEncoder.cpp
@@ +31,5 @@
> +
> +namespace {
> +
> +template<typename T>
> +class OpusHeaderInt {
Add a comment explaining what this is, and what T is.
Might be simpler to use a function in OpusEncoder, something like
template <typename T> void SerializeToBuffer(nsTArray<uint8_t>*, T) { ... }
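A possible shape for that helper (a sketch; little-endian byte order is an assumption based on the Opus header layout):

template <typename T>
static void SerializeToBuffer(nsTArray<uint8_t>* aOutput, T aValue)
{
  // Append the value one byte at a time, least significant byte first.
  for (uint32_t i = 0; i < sizeof(T); i++) {
    aOutput->AppendElement(static_cast<uint8_t>(aValue >> (i * 8)));
  }
}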
@@ +55,5 @@
> + {
> + return sizeof(T);
> + }
> +
> + void SerializeToBuffer(uint8_t *aBuff, int aBuffLen, int &aOutLen) const
Just take an nsTArray<uint8_t>* aBuffer. Then you can just use aBuffer->AppendElement(...) to write a byte to the buffer.
@@ +73,5 @@
> +private:
> + T mValue;
> +};
> +
> +class OpusIdHeader {
Add a comment explaining what this is
@@ +77,5 @@
> +class OpusIdHeader {
> +public:
> + OpusIdHeader(uint8_t aChannelCount, uint16_t aPreskip,
> + uint32_t aInputSampleRate, int16_t aOutputGain)
> + : mVersion(1)
Could we just have another function
void SerializeOpusIdHeader(nsTArray<uint8_t>*, ...) { ... }
So hopefully we can remove all these classes and use functions in OpusEncoder instead.
@@ +92,5 @@
> +
> + void SerializeToBuffer(uint8_t *aBuff, int aBuffLen, int &aOutLen) const
> + {
> + uint8_t *p = aBuff;
> + memcpy(p, mMagic, sizeof(mMagic));
Here you can write
static const char magic[] = "OpusHead";
memcpy(aBuffer->AppendElements(sizeof(magic) - 1), magic, sizeof(magic) - 1);
@@ +114,5 @@
> + aOutLen = p - aBuff;
> + }
> +
> +private:
> + uint8_t mMagic[8]; // "OpusHead".
Why even store this here?
@@ +268,5 @@
> + // that is, the source sampling rate must divide 48000 evenly.
> + MOZ_ASSERT((aSamplingRate >= 8000) && (kOpusSamplingRate / aSamplingRate) *
> + aSamplingRate == kOpusSamplingRate,
> + "The source sample rate should be greater than 8000 and divides "
> + "48000 evenly.");
In general we'll need to be able to resample here. For now, let's not do that, but make this into a conditional test instead of an assert and fail if the rate isn't possible.
@@ +321,5 @@
> + }
> + case COMMENT_HEADER:
> + {
> + nsString vender;
> + vender.AppendASCII(opus_get_version_string());
"vendor"
@@ +399,5 @@
> +
> + // Start encoding data.
> + mState = ENCODING;
> +
> + nsTArray<AudioDataValue> pcm;
Use nsAutoTArray<AudioDataValue, 9600> or something like that.
@@ +400,5 @@
> + // Start encoding data.
> + mState = ENCODING;
> +
> + nsTArray<AudioDataValue> pcm;
> + pcm.SetCapacity(GetPacketDuration() * mChannels);
Don't use SetCapacity
@@ +417,5 @@
> +
> + if (!chunk.IsNull()) {
> + MOZ_ASSERT(chunk.mChannelData.Length() <= MAX_CHANNELS);
> +
> + // Append the interleaved data to the end of pcm buffer.
Before you do this, use methods in AudioChannelFormat to upmix/downmix the data to the correct number of channels.
::: content/media/encoder/OpusEncoder.h
@@ +12,5 @@
> +struct OpusEncoder;
> +
> +namespace mozilla {
> +
> +class OpusEncoder : public AudioTrackEncoder
OpusTrackEncoder
@@ +34,5 @@
> + ID_HEADER,
> + COMMENT_HEADER,
> + DATA
> + } mEncoderStateInternal;
> + ::OpusEncoder* mEncoder;
Then you won't need this ::
@@ +36,5 @@
> + DATA
> + } mEncoderStateInternal;
> + ::OpusEncoder* mEncoder;
> +
> + /*
/** comment
@@ +40,5 @@
> + /*
> + * A local segment queue which stores the raw segments.
> + */
> + nsAutoPtr<AudioSegment> mSourceSegment;
> + int mLookahead;
Add a comment to explain mLookahead
Comment on attachment 753151 [details] [diff] [review]
Part5: Implement MediaEncoder.h/.cpp
Review of attachment 753151 [details] [diff] [review]:
-----------------------------------------------------------------
::: content/media/encoder/MediaEncoder.cpp
@@ +18,5 @@
> +MediaEncoder::~MediaEncoder()
> +{
> + if (mWriter) {
> + delete mWriter;
> + mWriter = nullptr;
These are nsAutoPtrs, so you don't need to do this.
@@ +46,5 @@
> +
> + // This stream has stopped and reached the end of track.
> + if (aTrackEvents == MediaStreamListener::TRACK_EVENT_ENDED) {
> + LOG("TRACK_EVENT_ENDED received in [MediaEncoder].");
> + mAudioEncoder->NotifyEndOfStream();
Move all this code in NotifyQueuedTrackChanges to AudioTrackEncoder. So give AudioTrackEncoder a NotifyQueuedTrackChanges method and just forward the call to that method.
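For example (a sketch; the signature follows MediaStreamListener but may not match the final patch):

void
MediaEncoder::NotifyQueuedTrackChanges(MediaStreamGraph* aGraph, TrackID aID,
                                       TrackRate aTrackRate,
                                       TrackTicks aTrackOffset,
                                       uint32_t aTrackEvents,
                                       const MediaSegment& aQueuedMedia)
{
  // Forward the notification; the track encoder owns the queueing logic.
  mAudioEncoder->NotifyQueuedTrackChanges(aGraph, aID, aTrackRate,
                                          aTrackOffset, aTrackEvents,
                                          aQueuedMedia);
}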
@@ +80,5 @@
> +MediaEncoder::NotifyRemoved(MediaStreamGraph* aGraph)
> +{
> + // In case that MediaEncoder does not receive a TRACK_EVENT_ENDED event.
> + LOG("NotifyRemoved in [MediaEncoder].");
> + mAudioEncoder->NotifyEndOfStream();
Forward this to a NotifyRemoved method on mAudioEncoder
@@ +106,5 @@
> +MediaEncoder::GetEncodedData(nsTArray<uint8_t>* aOutput)
> +{
> + MOZ_ASSERT(!NS_IsMainThread());
> +
> + aOutput = new nsAutoTArray<uint8_t, TRACK_BUFFER_LEN>();
BTW, never allocate nsAutoTArray on the heap.
@@ +112,5 @@
> + bool reloop = true;
> + while (reloop) {
> + switch (mEncoderStateInternal) {
> + case ENCODE_HEADER: {
> + mAudioEncoder->GetHeader(aOutput);
You should probably document in TrackEncoder::GetHeader that returning a zero-length buffer means there's no more headers and it's time to change state, and that if a buffer of length > 0 is returned, then GetHeader will be called again.
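For illustration, the caller-side loop under that protocol might look like this (a sketch only):

nsTArray<uint8_t> header;
do {
  header.Clear();
  mAudioEncoder->GetHeader(&header);
  if (!header.IsEmpty()) {
    // Hand the header bytes to the ContainerWriter here.
  }
} while (!header.IsEmpty());
// No more headers: advance to encoding track data.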
@@ +113,5 @@
> + while (reloop) {
> + switch (mEncoderStateInternal) {
> + case ENCODE_HEADER: {
> + mAudioEncoder->GetHeader(aOutput);
> + if (aOutput->Length()) {
IsEmpty() is a bit easier to read
@@ +155,5 @@
> + reloop = false;
> + break;
> + }
> +
> + rv = mWriter->GetContainerData(aOutput, eos ? true : false);
Don't write "eos ? true : false". You can usually just write "eos".
::: content/media/encoder/MediaEncoder.h
@@ +14,5 @@
> +namespace mozilla {
> +
> +class VideoTrackEncoder;
> +
> +/*
/**
@@ +64,5 @@
> + ENCODING, // Encoder is encoding track data.
> + PAUSED, // Encoder is paused, and stop appending the raw segment
> + // from MediaStream.
> + SHUTDOWN // Encoder has done encoding or some error has occurred.
> + };
Don't have this. Just add boolean methods to check whether we're paused or shut down.
@@ +98,5 @@
> +
> + /*
> + * Encodes the raw track data and returns the final container data. Assuming
> + * it is called on a single worker thread. GetEncodedData allocates the buffer
> + * aOutput, fill it up with a page (the final container data), returns the
We shouldn't refer to "page" here because that's an Ogg thing.
I don't think we need to say anything about what boundary constraints there are for data.
@@ +127,5 @@
> + * Else
> + * Loop back to ENCODE_TRACK
> + *
> + * If internalState is ENCODE_DONE
> + * Stop the loop
Move this comment to the implementation code.
@@ +130,5 @@
> + * If internalState is ENCODE_DONE
> + * Stop the loop
> + *
> + */
> + void GetEncodedData(nsTArray<uint8_t>* aOutput);
nsTArray<nsAutoArrayPtr<uint8_t> >* aOutput
@@ +141,5 @@
> + ChangeState(MediaEncoder::PAUSED);
> + }
> +
> + /*
> + * Resume and start appending raw segments to the queue of encoder.
As we discussed, we can remove the pausing/resuming from this code and have the MediaRecorder use an internal MediaStream to receive the data, which it can pause and resume itself.
@@ +171,5 @@
> + IDLE,
> + ENCODE_HEADER,
> + ENCODE_TRACK,
> + ENCODE_DONE,
> + } mEncoderStateInternal;
You can call this mState now that MediaEncoderState is removed.
Comment on attachment 753154 [details] [diff] [review]
Part6: Add EncoderTraits.h/.cpp
Review of attachment 753154 [details] [diff] [review]:
-----------------------------------------------------------------
::: content/media/encoder/EncoderTraits.cpp
@@ +66,5 @@
> + nsAutoPtr<AudioTrackEncoder> audioEncoder;
> + nsAutoPtr<VideoTrackEncoder> videoEncoder;
> +
> + nsString type(aMIMEType);
> + bool isAudio = (type.Find("audio") == 0) ? true : false;
bool isAudio = type.Find("audio") == 0;
::: content/media/encoder/EncoderTraits.h
@@ +16,5 @@
> +class MediaEncoder;
> +
> +class EncoderTraits {
> +public:
> + /*
/** comment
@@ +25,5 @@
> +
> + /*
> + * Returns the list of supported MIMETypes.
> + */
> + static void GetSupportedMIMETypes(nsTArray<nsString>* aOutput);
Remove this stuff
::: content/media/encoder/Makefile.in
@@ +14,5 @@
> FAIL_ON_WARNINGS := 1
>
>
> CPPSRCS += \
> + EncoderTraits.cpp \
fix indent
Assignee | ||
Comment 120•11 years ago
|
||
Fixes that are not covered in the last review:
1. Remove the protected method |virtual void ProcessEndOfStream() = 0;|
Since all it does is set a flag on the ogg packet.
2. Remove the protected parameter |nsTArray<char> mPacket;|
I don't think it is in use.
Attachment #752590 -
Attachment is obsolete: true
Attachment #752590 -
Flags: feedback?(roc)
Attachment #754644 -
Flags: feedback?(roc)
Assignee | ||
Updated•11 years ago
|
Attachment #752592 -
Flags: feedback?(roc)
Assignee | ||
Updated•11 years ago
|
Attachment #752594 -
Flags: feedback?(roc)
Assignee | ||
Updated•11 years ago
|
Attachment #752653 -
Flags: feedback?(roc)
Assignee | ||
Updated•11 years ago
|
Attachment #753151 -
Flags: feedback?(roc)
Assignee | ||
Updated•11 years ago
|
Attachment #753154 -
Flags: feedback?(roc)
Assignee | ||
Comment 121•11 years ago
|
||
Hi Ralph, many thanks for the help on Opus encoder last week!
Could you give me some ideas about what the serialno should be when initializing the ogg stream?
Attachment #752592 -
Attachment is obsolete: true
Attachment #754649 -
Flags: feedback?(roc)
Flags: needinfo?(giles)
Assignee | ||
Comment 122•11 years ago
|
||
Rename the "WriteEncodedData" to "WriteEncodedTrack", be more clarified on what this method is doing.
Attachment #754644 -
Attachment is obsolete: true
Attachment #754644 -
Flags: feedback?(roc)
Attachment #754658 -
Flags: feedback?(roc)
Assignee | ||
Comment 123•11 years ago
|
||
Same as the last change, renaming "WriteEncodedData" to "WriteEncodedTrack".
Attachment #754649 -
Attachment is obsolete: true
Attachment #754649 -
Flags: feedback?(roc)
Attachment #754660 -
Flags: feedback?(roc)
Comment on attachment 754658 [details] [diff] [review]
Part1: Add ContainerWriter.h/.cpp
Review of attachment 754658 [details] [diff] [review]:
-----------------------------------------------------------------
::: content/media/encoder/ContainerWriter.h
@@ +38,5 @@
> + /**
> + * Copies the final container data to aOutput if it has accumulated enough
> + * packets from WriteEncodedTrack. If aFlags is true with FLUSH_NEEDED,
> + * container data will be copied to aOutput even the number of accumulated
> + * packets are not enough.
Make it more clear that individual buffers of data should be appended to aOutput; existing elements of aOutput should not be modified.
@@ +41,5 @@
> + * container data will be copied to aOutput even the number of accumulated
> + * packets are not enough.
> + */
> + virtual nsresult GetContainerData(nsTArray<nsAutoArrayPtr<uint8_t> >* aOutput,
> + uint32_t aFlags = 0) = 0;
Use a separate enum with separate flags here. So both flag values can be 1 << 0.
Attachment #754658 -
Flags: feedback?(roc) → feedback+
Comment on attachment 754660 [details] [diff] [review]
Part2: Implement OggWriter.h/.cpp
Review of attachment 754660 [details] [diff] [review]:
-----------------------------------------------------------------
::: content/media/ogg/OggWriter.cpp
@@ +97,5 @@
> + nsAutoArrayPtr<uint8_t> buffer(new uint8_t[mOggPage.header_len +
> + mOggPage.body_len]);
> + memcpy(buffer.get(), mOggPage.header, mOggPage.header_len);
> + memcpy(buffer.get() + mOggPage.header_len, mOggPage.body, mOggPage.body_len);
> + aOutput->AppendElement(buffer);
buffer.forget()
Attachment #754660 -
Flags: feedback?(roc) → feedback+
Comment on attachment 754658 [details] [diff] [review]
Part1: Add ContainerWriter.h/.cpp
Review of attachment 754658 [details] [diff] [review]:
-----------------------------------------------------------------
::: content/media/encoder/ContainerWriter.h
@@ +38,5 @@
> + /**
> + * Copies the final container data to aOutput if it has accumulated enough
> + * packets from WriteEncodedTrack. If aFlags is true with FLUSH_NEEDED,
> + * container data will be copied to aOutput even the number of accumulated
> + * packets are not enough.
I think the comment for FLUSH_NEEDED should specifically say that for Ogg containers, this will force a page to be flushed even if it's not full.
Assignee | ||
Comment 127•11 years ago
|
||
Put FLUSH_NEEDED in a separate enum, and add more comments about the flush operation.
Attachment #754658 -
Attachment is obsolete: true
Attachment #754679 -
Flags: feedback?(roc)
Assignee | ||
Comment 128•11 years ago
|
||
Fix |aOutput->AppendElement(buffer.forget());|
Thanks for the reminder :)
Attachment #754660 -
Attachment is obsolete: true
Attachment #754681 -
Flags: feedback?(roc)
Assignee | ||
Comment 129•11 years ago
|
||
Please find my comments in the code review.
Attachment #752594 -
Attachment is obsolete: true
Attachment #754689 -
Flags: feedback?(roc)
Assignee | ||
Comment 130•11 years ago
|
||
Comment on attachment 754689 [details] [diff] [review]
Part3: Add TrackEncoder and AudioEncoder
Review of attachment 754689 [details] [diff] [review]:
-----------------------------------------------------------------
::: content/media/encoder/TrackEncoder.cpp
@@ +47,5 @@
> + // The stream has stopped and reached the end of track.
> + if (aTrackEvents == MediaStreamListener::TRACK_EVENT_ENDED) {
> + LOG("TRACK_EVENT_ENDED received in [AudioTrackEncoder].");
> + NotifyEndOfStream();
> + }
NotifyEndOfStream sets mEndOfStream to true, and wakes up mReentrantMonitor.
I think it makes more sense to move this section after appending the audio segment; otherwise the encoder will be woken up by NotifyEndOfStream() while there is still leftover data in mRawSegment.
@@ +75,5 @@
> + mRawSegment->AppendAndConsumeChunk(&chunk);
> + }
> + iter.Next();
> + }
> + if (mRawSegment->GetDuration() >= GetPacketDuration()) {
So remove the check of "|| mEndOfStream" here.
Assignee | ||
Comment 131•11 years ago
|
||
Update the comments of Init and AppendAudioSegment.
Attachment #754689 -
Attachment is obsolete: true
Attachment #754689 -
Flags: feedback?(roc)
Attachment #754709 -
Flags: feedback?(roc)
Assignee | ||
Comment 132•11 years ago
|
||
Moved the method |InterleaveAndConvertBuffer| out of the scope of AudioSegment (it didn't really belong to AudioSegment anyway); this looks more reasonable since another object (OpusTrackEncoder) also uses this method.
The major change in this patch is that I've moved the down-mixing part of |WriteTo| into a new function named |DownmixAndInterleave|. I'm not entirely sure about down-mixing in the encoder, but it looks like most of |WriteTo| can be shared. This new function will also be called by |OpusTrackEncoder|.
Attachment #749706 -
Attachment is obsolete: true
Attachment #755256 -
Flags: feedback?(roc)
Assignee | ||
Comment 133•11 years ago
|
||
Create a new function |InterleaveTrackData|, which interleaves the track data and returns the result. It may also up-mix or down-mix if necessary. I think it could be shared by other audio encoders, so it has been moved up to the scope of AudioTrackEncoder.
Attachment #754709 -
Attachment is obsolete: true
Attachment #754709 -
Flags: feedback?(roc)
Attachment #755259 -
Flags: feedback?(roc)
Assignee | ||
Comment 134•11 years ago
|
||
Remove the Opus header classes and use functions instead. Other comments are in the code review.
Attachment #752653 -
Attachment is obsolete: true
Attachment #755262 -
Flags: feedback?(roc)
Attachment #755262 -
Flags: feedback?(cyu)
Assignee | ||
Comment 135•11 years ago
|
||
Comment on attachment 755262 [details] [diff] [review]
Part4: Add OpusEncoder.h/.cpp
Review of attachment 755262 [details] [diff] [review]:
-----------------------------------------------------------------
::: content/media/encoder/OpusTrackEncoder.cpp
@@ +141,5 @@
> + // that is, the source sampling rate must divide 48000 evenly.
> + if (!((aSamplingRate >= 8000) && (kOpusSamplingRate / aSamplingRate) *
> + aSamplingRate == kOpusSamplingRate)) {
> + LOG("[Opus] WARNING! The source sample rate should be greater than 8000 and"
> + " divides 48000 evenly.");
New change: Remove the assertion from this conditional check.
@@ +251,5 @@
> + }
> +
> + // Starts encoding data.
> + nsAutoTArray<AudioDataValue, 9600> pcm;
> + pcm.SetLength(GetPacketDuration() * mChannels);
New change: Declare pcm as a nsAutoTArray with a comfortable default size.
@@ +299,5 @@
> + }
> + }
> +
> + // Encode the data with Opus Encoder.
> + aOutput->SetLength(MAX_DATA_BYTES);
New change: Set aOutput to a comfortable max size, then set its length back to the actual value later.
Assignee | ||
Updated•11 years ago
|
Flags: needinfo?(giles)
Comment on attachment 754679 [details] [diff] [review]
Part1: Add ContainerWriter.h/.cpp
Review of attachment 754679 [details] [diff] [review]:
-----------------------------------------------------------------
::: content/media/encoder/ContainerWriter.h
@@ +25,5 @@
> + };
> +
> + enum {
> + FLUSH_NEEDED = 1 << 0
> + };
Move this down to be just above GetContainerData, which uses it.
Attachment #754679 -
Flags: feedback?(roc) → feedback+
Attachment #754681 -
Flags: feedback?(roc) → feedback+
Assignee | ||
Comment 137•11 years ago
|
||
Comment on attachment 755262 [details] [diff] [review]
Part4: Add OpusEncoder.h/.cpp
Review of attachment 755262 [details] [diff] [review]:
-----------------------------------------------------------------
::: content/media/encoder/OpusTrackEncoder.cpp
@@ +47,5 @@
> +{
> + // Format of serializing a string to buffer is, the length of string (32 bits,
> + // little endian), and the string.
> + SerializeToBuffer(aComment.Length(), aOutput);
> + aOutput->AppendElements(aComment.get(), aComment.Length());
Would it be better to use |PromiseFlatString(aComment.get())|?
Comment 138•11 years ago
|
||
Comment on attachment 755262 [details] [diff] [review]
Part4: Add OpusEncoder.h/.cpp
>+
>+template<typename T>
>+static void
>+SerializeToBuffer(T aValue, nsTArray<uint8_t>* aOutput)
>+{
>+ for (uint32_t i = 0; i < sizeof(T); i++) {
>+ aOutput->AppendElement((uint8_t)(0x000000ff & (aValue >> (i * 8))));
>+ }
>+}
>+
Please make it clear that this is for endian-neutral serialization of the integer types used in the Opus headers (T is an integer type).
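For instance (illustrative), SerializeToBuffer(static_cast<uint16_t>(0x1234), &buf) appends 0x34 followed by 0x12, so the output is little-endian regardless of the host's byte order.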
Comment on attachment 755256 [details] [diff] [review]
Part0: Modify MediaSegment and AudioSegment
Review of attachment 755256 [details] [diff] [review]:
-----------------------------------------------------------------
::: content/media/AudioSegment.cpp
@@ +14,5 @@
> static void
> InterleaveAndConvertBuffer(const SrcT** aSourceChannels,
> + int32_t aLength, float aVolume,
> + int32_t aChannels,
> + DestT* aOutput)
This part of the patch just seems to break indentation. Don't do that. (same below)
@@ +107,5 @@
> + const int16_t* sourceBuf = static_cast<const int16_t*>(aChannelData->ElementAt(i));
> + for (uint32_t j = 0; j < aDuration; ++j) {
> + conversionBuf[j] = AudioSampleToFloat(sourceBuf[j]);
> + }
> + aChannelData->ElementAt(i) = conversionBuf;
It would be better to make aChannelData a const nsTArray<const void*>&. Then instead of modifying aChannelData here, just set up a new nsAutoTArray<const void*> with the elements you want. When aSourceFormat == AUDIO_FORMAT_FLOAT32, copy aChannelData into that new array.
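A rough sketch of that pattern (the signature, buffer sizes, and the elided down-mix step are illustrative, not the final patch): take the channel pointers by const reference and build a local pointer array when conversion is needed, instead of overwriting the caller's array.

void
DownmixAndInterleave(const nsTArray<const void*>& aChannelData,
                     AudioSampleFormat aSourceFormat, int32_t aDuration,
                     float aVolume, int32_t aOutputChannels,
                     AudioDataValue* aOutput)
{
  nsAutoTArray<const void*, 8> channelData;
  nsAutoTArray<float, 1024> conversionBuf;
  if (aSourceFormat == AUDIO_FORMAT_FLOAT32) {
    // Already float32: reuse the caller's pointers as-is.
    channelData.AppendElements(aChannelData);
  } else {
    // Convert each int16 channel into a float buffer owned by this function,
    // and point the local array at the converted data.
    conversionBuf.SetLength(aDuration * aChannelData.Length());
    for (uint32_t i = 0; i < aChannelData.Length(); ++i) {
      float* dest = conversionBuf.Elements() + i * aDuration;
      const int16_t* src = static_cast<const int16_t*>(aChannelData[i]);
      for (int32_t j = 0; j < aDuration; ++j) {
        dest[j] = AudioSampleToFloat(src[j]);
      }
      channelData.AppendElement(static_cast<const void*>(dest));
    }
  }
  // ... down-mix channelData to aOutputChannels and interleave into aOutput,
  // applying aVolume, as the existing WriteTo code does ...
}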
::: content/media/AudioSegment.h
@@ +31,5 @@
> + int32_t aChannels,
> + AudioDataValue* aOutput);
> +/**
> + * Down-mix audio channels, and interleave the channel data. Interleaved result
> + * data is stored into aOutput.
Document how many elements aOutput needs.
@@ +33,5 @@
> +/**
> + * Down-mix audio channels, and interleave the channel data. Interleaved result
> + * data is stored into aOutput.
> + */
> +void DownmixAndInterleave(nsTArray<const void*>* aChannelData,
Document that aChannelData is consumed and may be garbage after this call.
Actually, it would be better to make aChannelData a const nsTArray<const void*>& and make sure that DownmixAndInterleave makes a copy of it if necessary. See comments above.
@@ +35,5 @@
> + * data is stored into aOutput.
> + */
> +void DownmixAndInterleave(nsTArray<const void*>* aChannelData,
> + AudioSampleFormat aSourceFormat,
> + int32_t aDuration, float aVolume, int32_t aChannels,
Call aChannels aOutputChannels to be more clear
Comment on attachment 755259 [details] [diff] [review]
Part3: Add TrackEncoder and AudioEncoder
Review of attachment 755259 [details] [diff] [review]:
-----------------------------------------------------------------
::: content/media/encoder/TrackEncoder.h
@@ +151,5 @@
> + nsAutoPtr<AudioSegment> mRawSegment;
> +
> + /**
> + * True if we have received an event of TRACK_EVENT_ENDED from MediaStreamGraph,
> + * or the MediaEncoder is removed from its source stream.
Comment that it's protected by mReentrantMonitor.
@@ +156,5 @@
> + */
> + bool mEndOfStream;
> +
> + bool mInitialized;
> + bool mDoneEncoding;
Put these below mSamplingRate since they aren't protected by the lock. For all fields not protected by the lock, comment that they're accessed only on the media encoder thread.
Attachment #755259 -
Flags: feedback?(roc) → feedback+
Comment on attachment 755262 [details] [diff] [review]
Part4: Add OpusEncoder.h/.cpp
Review of attachment 755262 [details] [diff] [review]:
-----------------------------------------------------------------
::: content/media/encoder/OpusTrackEncoder.cpp
@@ +46,5 @@
> +SerializeToBuffer(const nsCString& aComment, nsTArray<uint8_t>* aOutput)
> +{
> + // Format of serializing a string to buffer is, the length of string (32 bits,
> + // little endian), and the string.
> + SerializeToBuffer(aComment.Length(), aOutput);
Make this more clear and safer by explicitly casting aComment.Length() to uint32_t.
@@ +47,5 @@
> +{
> + // Format of serializing a string to buffer is, the length of string (32 bits,
> + // little endian), and the string.
> + SerializeToBuffer(aComment.Length(), aOutput);
> + aOutput->AppendElements(aComment.get(), aComment.Length());
This is fine.
@@ +56,5 @@
> +SerializeOpusIdHeader(uint8_t aChannelCount, uint16_t aPreskip,
> + uint32_t aInputSampleRate, nsTArray<uint8_t>* aOutput)
> +{
> + // The magic signature, null terminator has to be stripped off from strings.
> + static const uint8_t magic[9] = "OpusHead";
magic[8]. We don't need the null byte here.
@@ +74,5 @@
> + SerializeToBuffer(aInputSampleRate, aOutput);
> +
> + // Output gain, an encoder should set this field to zero (16 bits, signed,
> + // little endian).
> + SerializeToBuffer(0, aOutput);
I have no idea which overload will get used here. Explicitly cast 0 to int16_t.
@@ +87,5 @@
> + const nsTArray<nsCString>& aComments,
> + nsTArray<uint8_t>* aOutput)
> +{
> + // The magic signature, null terminator has to be stripped off.
> + static const uint8_t magic[9] = "OpusTags";
magic[8]
@@ +96,5 @@
> + SerializeToBuffer(aVendor, aOutput);
> +
> + // Add comments; In order of comment list length, comment #0 string length,
> + // comment #0 string, comment #1 string length, comment #1 string...
> + SerializeToBuffer(aComments.Length(), aOutput);
The length is supposed to be 32 bits right? Explicitly cast to uint32_t so that's clear.
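For example (illustrative only), SerializeToBuffer(static_cast<uint32_t>(aComments.Length()), aOutput); makes the 4-byte width explicit.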
@@ +123,5 @@
> +
> +nsresult
> +OpusTrackEncoder::Init(int aChannels, int aSamplingRate)
> +{
> + // The track must have at least 1 or 2 channels.
remove "at least". It must be 1 or 2 channels.
@@ +141,5 @@
> + // that is, the source sampling rate must divide 48000 evenly.
> + if (!((aSamplingRate >= 8000) && (kOpusSamplingRate / aSamplingRate) *
> + aSamplingRate == kOpusSamplingRate)) {
> + LOG("[Opus] WARNING! The source sample rate should be greater than 8000 and"
> + " divides 48000 evenly.");
I think we should return NS_ERROR_FAILURE here.
@@ +168,5 @@
> +{
> + {
> + ReentrantMonitorAutoEnter mon(mReentrantMonitor);
> + while (!mEncoder) {
> + mReentrantMonitor.Wait();
This is a bit scary. What happens if we decide to shut down the browser, or the page, while the encoder thread is in this loop? Seems like we could be stuck here forever.
I think we need a mechanism for cancelling encoding. That signal probably needs to come from MediaEncoder through ContainerWriter to each track encoder. So we need a Cancel method on TrackEncoder that can set a 'cancelled' flag and notify mReentrantMonitor?
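A minimal sketch of that mechanism (the names follow what the later patches ended up using, NotifyCancel() and mCanceled; nothing is fixed yet at this point):

void
TrackEncoder::NotifyCancel()
{
  ReentrantMonitorAutoEnter mon(mReentrantMonitor);
  // Flag the encoder as canceled and wake any thread blocked in Wait(), so
  // GetHeader()/GetEncodedTrack() can bail out instead of waiting forever.
  mCanceled = true;
  mReentrantMonitor.NotifyAll();
}

The waiting side then folds the flag into its loop condition, e.g. |while (!mCanceled && !mEncoder) { mReentrantMonitor.Wait(); }|, and returns an error once it wakes up canceled.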
@@ +232,5 @@
> + }
> +
> + int frameCopied = 0;
> + AudioSegment::ChunkIterator iter(*mRawSegment);
> + while(mSourceSegment->GetDuration() < GetPacketDuration() && !iter.IsEnded()) {
Space after 'while'
I'm not sure why we're pulling data from mRawSegment until mSourceSegment has at least GetPacketDuration() data in it. Why not just pull everything from mRawSegment into mSourceSegment here?
@@ +254,5 @@
> + nsAutoTArray<AudioDataValue, 9600> pcm;
> + pcm.SetLength(GetPacketDuration() * mChannels);
> + AudioSegment::ChunkIterator iter(*mSourceSegment);
> + int frameCopied = 0;
> + while(!iter.IsEnded() && frameCopied < GetPacketDuration()) {
space after 'while'
@@ +292,5 @@
> + }
> +
> + // Append null data to pcm buffer if the leftover data is not enough for
> + // opus encoder.
> + if (frameCopied < GetPacketDuration() && mEndOfStream){
space before {
::: content/media/encoder/OpusTrackEncoder.h
@@ +42,5 @@
> +
> + /**
> + * A local segment queue which stores the raw segments.
> + */
> + nsAutoPtr<AudioSegment> mSourceSegment;
Can't this just be a local variable to GetEncodedTrack?
Assignee | ||
Comment 142•11 years ago
|
||
Comment on attachment 755256 [details] [diff] [review]
Part0: Modify MediaSegment and AudioSegment
Review of attachment 755256 [details] [diff] [review]:
-----------------------------------------------------------------
::: content/media/AudioSegment.h
@@ +31,5 @@
> + int32_t aChannels,
> + AudioDataValue* aOutput);
> +/**
> + * Down-mix audio channels, and interleave the channel data. Interleaved result
> + * data is stored into aOutput.
Like how many samples will be copied to aOutput?
Assignee | ||
Comment 143•11 years ago
|
||
Comment on attachment 755262 [details] [diff] [review]
Part4: Add OpusEncoder.h/.cpp
Review of attachment 755262 [details] [diff] [review]:
-----------------------------------------------------------------
::: content/media/encoder/OpusTrackEncoder.cpp
@@ +56,5 @@
> +SerializeOpusIdHeader(uint8_t aChannelCount, uint16_t aPreskip,
> + uint32_t aInputSampleRate, nsTArray<uint8_t>* aOutput)
> +{
> + // The magic signature, null terminator has to be stripped off from strings.
> + static const uint8_t magic[9] = "OpusHead";
The compiler was complaining about magic[8] = "OpusHead";, saying the array is not big enough. Or should it be like the following?
char magic[8];
strcpy(magic, "OpusHead");
@@ +74,5 @@
> + SerializeToBuffer(aInputSampleRate, aOutput);
> +
> + // Output gain, an encoder should set this field to zero (16 bits, signed,
> + // little endian).
> + SerializeToBuffer(0, aOutput);
Thanks for pointing this out!
@@ +168,5 @@
> +{
> + {
> + ReentrantMonitorAutoEnter mon(mReentrantMonitor);
> + while (!mEncoder) {
> + mReentrantMonitor.Wait();
Sounds good to me.
@@ +232,5 @@
> + }
> +
> + int frameCopied = 0;
> + AudioSegment::ChunkIterator iter(*mRawSegment);
> + while(mSourceSegment->GetDuration() < GetPacketDuration() && !iter.IsEnded()) {
Hmm, I don't know why either!
Pulling everything at once also reduces the cost of taking the lock.
::: content/media/encoder/OpusTrackEncoder.h
@@ +42,5 @@
> +
> + /**
> + * A local segment queue which stores the raw segments.
> + */
> + nsAutoPtr<AudioSegment> mSourceSegment;
Yes, it should be.
Comment 144•11 years ago
|
||
Comment on attachment 755262 [details] [diff] [review]
Part4: Add OpusEncoder.h/.cpp
Review of attachment 755262 [details] [diff] [review]:
-----------------------------------------------------------------
Hi shelly,
For this patch, where is OpusEncoder.h?
It is required by content/media/encoder/EncoderTraits.cpp:14:25: error: OpusEncoder.h
Assignee | ||
Comment 145•11 years ago
|
||
Comment on attachment 755262 [details] [diff] [review]
Part4: Add OpusEncoder.h/.cpp
Review of attachment 755262 [details] [diff] [review]:
-----------------------------------------------------------------
::: content/media/encoder/OpusTrackEncoder.h
@@ +42,5 @@
> +
> + /**
> + * A local segment queue which stores the raw segments.
> + */
> + nsAutoPtr<AudioSegment> mSourceSegment;
Actually, it can't be a local variable. The Opus encoder only takes GetPacketDuration() samples from mSourceSegment in each encoding cycle, so mSourceSegment needs to hold the remaining track data across calls.
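A rough per-call illustration of that flow (simplified; the real GetEncodedTrack does more than this):

mSourceSegment->AppendFrom(mRawSegment);  // move newly arrived raw data over
if (mSourceSegment->GetDuration() >= GetPacketDuration()) {
  // encode exactly one packet's worth of samples ...
  mSourceSegment->RemoveLeading(GetPacketDuration());
}
// whatever is left in mSourceSegment waits for the next GetEncodedTrack() call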
Assignee | ||
Comment 146•11 years ago
|
||
Attachment #755256 -
Attachment is obsolete: true
Attachment #755256 -
Flags: feedback?(roc)
Attachment #756347 -
Flags: review?(roc)
Assignee | ||
Comment 147•11 years ago
|
||
Attachment #754679 -
Attachment is obsolete: true
Attachment #756348 -
Flags: review?(roc)
Assignee | ||
Comment 148•11 years ago
|
||
Attachment #755259 -
Attachment is obsolete: true
Attachment #756350 -
Flags: review?(roc)
Assignee | ||
Updated•11 years ago
|
Attachment #755262 -
Flags: feedback?(roc)
Attachment #755262 -
Flags: feedback?(cyu)
Assignee | ||
Updated•11 years ago
|
Attachment #756350 -
Flags: review?(roc)
Assignee | ||
Comment 149•11 years ago
|
||
Add |NotifyCancel()| to wake up mReentrantMonitor in any case.
Attachment #756350 -
Attachment is obsolete: true
Attachment #756364 -
Flags: review?(roc)
Assignee | ||
Comment 150•11 years ago
|
||
Moved some comments over to the code review of this patch.
Attachment #755262 -
Attachment is obsolete: true
Attachment #756408 -
Flags: review?(roc)
Assignee | ||
Comment 151•11 years ago
|
||
Comment on attachment 756408 [details] [diff] [review]
Part4: Add OpusTrackEncoder.h/.cpp
Review of attachment 756408 [details] [diff] [review]:
-----------------------------------------------------------------
::: content/media/encoder/OpusTrackEncoder.cpp
@@ +57,5 @@
> +SerializeOpusIdHeader(uint8_t aChannelCount, uint16_t aPreskip,
> + uint32_t aInputSampleRate, nsTArray<uint8_t>* aOutput)
> +{
> + // The magic signature, null terminator has to be stripped off from strings.
> + static const uint8_t magic[9] = "OpusHead";
The compiler was complaining about magic[8] = "OpusHead";, saying the array is not big enough. Or should it be:
char magic[8];
strcpy(magic, "OpusHead");
@@ +176,5 @@
> + }
> +
> + if (mCancelled) {
> + return NS_ERROR_FAILURE;
> + }
If mCancelled is set to true from MediaEncoder, it will break out of the wait on mReentrantMonitor and return an error code here.
@@ +237,5 @@
> + }
> +
> + if (mCancelled) {
> + return NS_ERROR_FAILURE;
> + }
If mCancelled is set to true from MediaEncoder, it will break out of the wait on mReentrantMonitor and return an error code here.
Attachment #756347 -
Flags: review?(roc) → review+
Attachment #756348 -
Flags: review?(roc) → review+
Attachment #756364 -
Flags: review?(roc) → review+
Comment on attachment 756408 [details] [diff] [review]
Part4: Add OpusTrackEncoder.h/.cpp
Review of attachment 756408 [details] [diff] [review]:
-----------------------------------------------------------------
::: content/media/encoder/OpusTrackEncoder.cpp
@@ +57,5 @@
> +SerializeOpusIdHeader(uint8_t aChannelCount, uint16_t aPreskip,
> + uint32_t aInputSampleRate, nsTArray<uint8_t>* aOutput)
> +{
> + // The magic signature, null terminator has to be stripped off from strings.
> + static const uint8_t magic[9] = "OpusHead";
This is fine.
@@ +249,5 @@
> + frameCopied += chunk.GetDuration();
> + }
> + iter.Next();
> + }
> + mRawSegment->RemoveLeading(frameCopied);
I believe this can all be replaced with mSourceSegment->AppendFrom(mRawSegment).
@@ +253,5 @@
> + mRawSegment->RemoveLeading(frameCopied);
> +
> + // Pad |mLookahead| samples to the end of source stream to prevent lost of
> + // original data, the pcm duration will be calculated at rate 48K later.
> + if (mRawSegment->GetDuration() == 0 && mEndOfStream) {
mRawSegment->GetDuration() will always return 0 now.
Assignee | ||
Updated•11 years ago
|
Attachment #756348 -
Attachment description: Part1: Add ContainerWriter.h/.cpp → Part1: Add ContainerWriter.h
Assignee | ||
Comment 153•11 years ago
|
||
Remove the CPP_SOURCES from Makefile.in, and move it to moz.build.
Attachment #754681 -
Attachment is obsolete: true
Attachment #757317 -
Flags: review?(roc)
Assignee | ||
Comment 154•11 years ago
|
||
Remove the CPP_SOURCES from Makefile.in, and move it to moz.build.
Attachment #756364 -
Attachment is obsolete: true
Attachment #757318 -
Flags: review?(roc)
Assignee | ||
Comment 155•11 years ago
|
||
Attachment #756408 -
Attachment is obsolete: true
Attachment #756408 -
Flags: review?(roc)
Attachment #757319 -
Flags: review?(roc)
Assignee | ||
Comment 156•11 years ago
|
||
Comment on attachment 757319 [details] [diff] [review]
Part4: Add OpusTrackEncoder.h/.cpp
Review of attachment 757319 [details] [diff] [review]:
-----------------------------------------------------------------
::: content/media/encoder/OpusTrackEncoder.cpp
@@ +239,5 @@
> + if (mCancelled) {
> + return NS_ERROR_FAILURE;
> + }
> +
> + mSourceSegment->AppendFrom(mRawSegment);
Remove the previous implementation that iterated over every chunk in mRawSegment.
@@ +243,5 @@
> + mSourceSegment->AppendFrom(mRawSegment);
> +
> + // Pad |mLookahead| samples to the end of source stream to prevent lost of
> + // original data, the pcm duration will be calculated at rate 48K later.
> + if (mEndOfStream) {
Remove the previous check of |mRawSegment->Duration() == 0|, since mEndOfStream is guaranteed to be set after appending any segment in |AudioTrackEncoder::NotifyQueuedTrackChanges|.
::: content/media/encoder/moz.build
@@ +17,2 @@
> 'TrackEncoder.cpp',
> ]
Move the CPP_SOURCES from Makefile.in to moz.build.
Assignee | ||
Comment 157•11 years ago
|
||
Attachment #753151 -
Attachment is obsolete: true
Attachment #753154 -
Attachment is obsolete: true
Attachment #757324 -
Flags: review?(roc)
Attachment #757324 -
Flags: feedback?(roc)
Attachment #757317 -
Flags: review?(roc) → review+
Attachment #757318 -
Flags: review?(roc) → review+
Assignee | ||
Comment 158•11 years ago
|
||
Comment on attachment 757324 [details] [diff] [review]
Part5: Implement MediaEncoder
Review of attachment 757324 [details] [diff] [review]:
-----------------------------------------------------------------
::: content/media/encoder/MediaEncoder.cpp
@@ +89,5 @@
> + nsRefPtr<MediaEncoder> encoder;
> +
> + if (aMIMEType.IsEmpty()) {
> + // TODO: Should pick out a default container+codec base on the track
> + // coming from MediaStreamGraph.
My thought on approaching this design is to use yet another reentrant monitor to delay the initialization of the encoders, since we don't know which ContainerWriter and which TrackEncoder to use until we receive the first valid track from MediaStreamGraph, but I am concerned about using too many monitors.
Another question: how should I determine when initialization is finished? If I receive a video segment from MediaStreamGraph first, then I know for sure we are encoding a video element, but if I receive an audio segment first, how can I tell whether we are encoding an audio element or a video element?
Comment on attachment 757319 [details] [diff] [review]
Part4: Add OpusTrackEncoder.h/.cpp
Review of attachment 757319 [details] [diff] [review]:
-----------------------------------------------------------------
::: content/media/encoder/OpusTrackEncoder.cpp
@@ +226,5 @@
> + // the monitor in this block.
> + ReentrantMonitorAutoEnter mon(mReentrantMonitor);
> +
> + while (!mEncoder && !mCancelled) {
> + mReentrantMonitor.Wait();
Why do we need this loop? Can we just remove it?
(In reply to Shelly Lin [:shelly] from comment #158)
> My thought of approaching this design is yet using another reentrant monitor
> to delay the initialization of encoders, since we don't know which
> ContainerWriter and which TrackEncoder to use until we receive the first
> valid track from MediaStreamGraph, but I am concern of using too many
> monitors.
Just leave it for now
> Other question is that, how should I determine the finishing of
> initialization? If I receive the video segment from MediaStreamGraph first,
> then I can for sure we are encoding a video element, but what if I receive
> an audio segment first, how can it be telled whether we are encoding an
> audio element or a video element?
Good question. Just ignore this for now. Soon we'll refactor some MediaStreamGraph stuff so the new tracks arrive at the same time and this will be easier to handle.
Comment on attachment 757324 [details] [diff] [review]
Part5: Implement MediaEncoder
Review of attachment 757324 [details] [diff] [review]:
-----------------------------------------------------------------
::: content/media/encoder/MediaEncoder.cpp
@@ +89,5 @@
> + nsRefPtr<MediaEncoder> encoder;
> +
> + if (aMIMEType.IsEmpty()) {
> + // TODO: Should pick out a default container+codec base on the track
> + // coming from MediaStreamGraph.
For now, just use Ogg+Opus.
@@ +149,5 @@
> +{
> + MOZ_ASSERT(!NS_IsMainThread());
> +
> + aMIMEType = mMIMEType;
> + aOutput = new nsTArray<nsAutoArrayPtr<uint8_t>>;
Don't do this. The caller will pass in a pointer to its own nsTArray.
Notice that assigning to aOutput here won't actually return anything to the caller, since aOutput is not a reference.
::: content/media/encoder/MediaEncoder.h
@@ +132,5 @@
> + nsAutoPtr<AudioTrackEncoder> mAudioEncoder;
> + nsAutoPtr<VideoTrackEncoder> mVideoEncoder;
> + int mState;
> + bool mShutdown;
> + nsString mMIMEType;
Move this up above mState so that fields with pointers in them precede other fields. Makes data pack better.
Attachment #757324 -
Flags: feedback?(roc) → feedback+
Assignee | ||
Comment 162•11 years ago
|
||
Many thanks for the previous review and feedback :), please find my comments in the code review.
Attachment #757840 -
Flags: feedback?(roc)
Assignee | ||
Comment 163•11 years ago
|
||
Comment on attachment 757840 [details] [diff] [review]
Solution of getting the length of data buffer
Review of attachment 757840 [details] [diff] [review]:
-----------------------------------------------------------------
::: content/media/encoder/ContainerWriter.h
@@ +47,5 @@
> + * even it is not full, and then copy these container data to a buffer for
> + * aOutput to append.
> + */
> + virtual nsresult GetContainerData(nsTArray<nsAutoArrayPtr<uint8_t>>* aOutputBufs,
> + nsTArray<uint32_t>* aOutputBufsLen,
After testing with MediaRecorder in practice, I found that there's no way to fetch the array length from an nsAutoArrayPtr (is that right?). Because the size of the data buffer is needed for creating a DOM file, we might need an additional out-parameter, aOutputBufsLen, on |GetContainerData()| to pass out the buffer lengths.
Assignee | ||
Comment 164•11 years ago
|
||
Comment on attachment 757319 [details] [diff] [review]
Part4: Add OpusTrackEncoder.h/.cpp
Review of attachment 757319 [details] [diff] [review]:
-----------------------------------------------------------------
::: content/media/encoder/OpusTrackEncoder.cpp
@@ +226,5 @@
> + // the monitor in this block.
> + ReentrantMonitorAutoEnter mon(mReentrantMonitor);
> +
> + while (!mEncoder && !mCancelled) {
> + mReentrantMonitor.Wait();
This is to ensure mEncoder is initialized. In the current implementation GetEncodedTrack will always be called after GetHeader(), so mEncoder is always valid here, but what if we change that guarantee some time later?
@@ +230,5 @@
> + mReentrantMonitor.Wait();
> + }
> +
> + // Wait when not enough raw data and it's not the end of stream yet.
> + while ((mRawSegment->GetDuration() + mSourceSegment->GetDuration() <
Or write
while (!mEncoder && (mRawSegment->GetDuration() + mSourceSegment->GetDuration() <
Comment on attachment 757840 [details] [diff] [review]
Solution of getting the length of data buffer
Review of attachment 757840 [details] [diff] [review]:
-----------------------------------------------------------------
::: content/media/encoder/ContainerWriter.h
@@ +47,5 @@
> + * even it is not full, and then copy these container data to a buffer for
> + * aOutput to append.
> + */
> + virtual nsresult GetContainerData(nsTArray<nsAutoArrayPtr<uint8_t>>* aOutputBufs,
> + nsTArray<uint32_t>* aOutputBufsLen,
Good point!
Let's make it an nsTArray<nsTArray<uint8_t> > then.
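A quick caller-side sketch of why the nested array solves the length problem (illustrative only; "writer" stands for whatever object holds the ContainerWriter):

nsTArray<nsTArray<uint8_t> > bufs;
writer->GetContainerData(&bufs, ContainerWriter::FLUSH_NEEDED);
for (uint32_t i = 0; i < bufs.Length(); i++) {
  const uint8_t* data = bufs[i].Elements();
  uint32_t length = bufs[i].Length();  // the length travels with each buffer
  // ... hand data/length to the code that builds the DOM file ...
}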
(In reply to Shelly Lin [:shelly] from comment #164)
> Or write
> while (!mEncoder && (mRawSegment->GetDuration() +
> mSourceSegment->GetDuration() <
You mean ||, but that sounds good.
Assignee | ||
Comment 167•11 years ago
|
||
Change to
| virtual nsresult GetContainerData(nsTArray<nsTArray<uint8_t> >* aOutputBufs, ...|
This has already been tested and verified with MediaRecorder.
Attachment #756348 -
Attachment is obsolete: true
Attachment #757840 -
Attachment is obsolete: true
Attachment #757840 -
Flags: feedback?(roc)
Attachment #758415 -
Flags: review?(roc)
Assignee | ||
Comment 168•11 years ago
|
||
Comments are in the code review.
Attachment #757317 -
Attachment is obsolete: true
Attachment #758416 -
Flags: review?(roc)
Assignee | ||
Comment 169•11 years ago
|
||
Comment on attachment 758416 [details] [diff] [review]
Part2: Implement OggWriter.h/.cpp
Review of attachment 758416 [details] [diff] [review]:
-----------------------------------------------------------------
::: content/media/ogg/OggWriter.cpp
@@ +98,5 @@
> + buffer.SetLength(mOggPage.header_len + mOggPage.body_len);
> + memcpy(buffer.Elements(), mOggPage.header, mOggPage.header_len);
> + memcpy(buffer.Elements() + mOggPage.header_len, mOggPage.body,
> + mOggPage.body_len);
> + aOutputBufs->AppendElement(buffer);
Append the nsTArray to aOutputBufs, instead of appending an nsAutoArrayPtr.
Assignee | ||
Comment 170•11 years ago
|
||
Rename mCancelled to mCanceled.
Attachment #757318 -
Attachment is obsolete: true
Attachment #758418 -
Flags: review?(roc)
Assignee | ||
Comment 171•11 years ago
|
||
Polish some comments; also, in the method |GetEncodedTrack()|:
- Remove the previous monitor wait of:
while (!mEncoder && !mCanceled)
- Merge the check of mEncoder into:
while (!mEncoder || ((mRawSegment->GetDuration() +
mSourceSegment->GetDuration() < GetPacketDuration()) &&
!mEndOfStream && !mCanceled)) {
mReentrantMonitor.Wait();
}
Attachment #757319 -
Attachment is obsolete: true
Attachment #757319 -
Flags: review?(roc)
Attachment #758421 -
Flags: review?(roc)
Assignee | ||
Comment 172•11 years ago
|
||
- Default the returned MIME type to "audio/ogg", and use Ogg+Opus if the MIME type passed in by the caller is empty.
- Change the parameter of GetEncodedData to |nsTArray<nsTArray<uint8_t> >* aOutputBufs|.
Attachment #757324 -
Attachment is obsolete: true
Attachment #757324 -
Flags: review?(roc)
Attachment #758424 -
Flags: review?(roc)
Comment 173•11 years ago
|
||
Since this issue essentially implements the backbone of the media encoding pipeline, I have created another META bug to track MediaEncoder status, and given this bug a clearer summary.
Alias: MediaEncoder
Summary: [Meta] Implement MediaEncoder framework → Implement MediaEncoder framework encoding pipeline
Blocks: MediaEncoder
Attachment #758415 -
Flags: review?(roc) → review+
Attachment #758416 -
Flags: review?(roc) → review+
Comment on attachment 758418 [details] [diff] [review]
Part3: Add TrackEncoder.h/.cpp and AudioTrackEncoder
Review of attachment 758418 [details] [diff] [review]:
-----------------------------------------------------------------
For the record, either spelling is correct.
Attachment #758418 -
Flags: review?(roc) → review+
Comment on attachment 758421 [details] [diff] [review]
Part4: Add OpusTrackEncoder.h/.cpp
Review of attachment 758421 [details] [diff] [review]:
-----------------------------------------------------------------
::: content/media/encoder/OpusTrackEncoder.cpp
@@ +235,5 @@
> + ReentrantMonitorAutoEnter mon(mReentrantMonitor);
> +
> + while (!mEncoder && !mCanceled) {
> + mReentrantMonitor.Wait();
> + }
I thought you were going to get rid of this?
@@ +241,5 @@
> + // Wait if mEncoder is not initialized, or when not enough raw data, but is
> + // not the end of stream nor is being canceled.
> + while (!mEncoder || ((mRawSegment->GetDuration() +
> + mSourceSegment->GetDuration() < GetPacketDuration()) &&
> + !mEndOfStream && !mCanceled)) {
It looks like this will wait if mEncoder is null and mCanceled is true. That seems wrong. Put "!mCanceled && (!mEncoder || ...)"
Comment on attachment 758424 [details] [diff] [review]
Part5: Implement MediaEncoder
Review of attachment 758424 [details] [diff] [review]:
-----------------------------------------------------------------
::: content/media/encoder/MediaEncoder.h
@@ +88,5 @@
> + virtual void NotifyRemoved(MediaStreamGraph* aGraph);
> +
> + /**
> + * Creates an encoder with a given MIME type. Returns null if we are unable
> + * to create the encoder.
Document that if aMIMEType is the empty string, MediaEncoder chooses the format automatically.
Attachment #758424 -
Flags: review?(roc) → review+
Assignee | ||
Comment 177•11 years ago
|
||
Attachment #758421 -
Attachment is obsolete: true
Attachment #758421 -
Flags: review?(roc)
Attachment #758932 -
Flags: review?(roc)
Attachment #758932 -
Flags: review?(roc) → review+
Assignee | ||
Comment 178•11 years ago
|
||
Comment on attachment 758932 [details] [diff] [review]
Part4: Add OpusTrackEncoder.h/.cpp
Review of attachment 758932 [details] [diff] [review]:
-----------------------------------------------------------------
::: content/media/encoder/OpusTrackEncoder.cpp
@@ +237,5 @@
> + // Wait if mEncoder is not initialized, or when not enough raw data, but is
> + // not the end of stream nor is being canceled.
> + while (!mCanceled && (!mEncoder || (mRawSegment->GetDuration() +
> + mSourceSegment->GetDuration() < GetPacketDuration() &&
> + !mEndOfStream))) {
(In reply to your previous comments)
Yes, I thought I had removed the unwanted part, my bad :(
I also rewrote the wait loop in GetHeader to |while (!mCanceled && !mEncoder)| to make them consistent.
Attachment #758932 -
Flags: review+ → review?(roc)
Attachment #758932 -
Flags: review?(roc) → review+
Assignee | ||
Comment 179•11 years ago
|
||
Attachment #758415 -
Attachment is obsolete: true
Attachment #759558 -
Flags: review?(roc)
Assignee | ||
Comment 180•11 years ago
|
||
Attachment #758416 -
Attachment is obsolete: true
Attachment #759559 -
Flags: review?(roc)
Assignee | ||
Comment 181•11 years ago
|
||
Comment on attachment 759559 [details] [diff] [review]
Part2: Implement OggWriter.h/.cpp
Review of attachment 759559 [details] [diff] [review]:
-----------------------------------------------------------------
::: content/media/ogg/OggWriter.cpp
@@ +98,5 @@
> + buffer->SetLength(mOggPage.header_len + mOggPage.body_len);
> + memcpy(buffer->Elements(), mOggPage.header, mOggPage.header_len);
> + memcpy(buffer->Elements() + mOggPage.header_len, mOggPage.body,
> + mOggPage.body_len);
> + aOutputBufs->AppendElement(buffer);
Randy and I came up with this approach of allocating and passing the data buffer in order to avoid redundant buffer copies.
It uses |nsTArray<nsTArray<uint8_t>* >* aOutputBufs|, allocates the data buffer here, and appends its pointer to aOutputBufs directly.
Assignee | ||
Comment 182•11 years ago
|
||
Attachment #758424 -
Attachment is obsolete: true
Attachment #759560 -
Flags: review?(roc)
Assignee | ||
Comment 183•11 years ago
|
||
Add comments about defaulting aMIMEType to "audio/ogg" if it is empty.
Change the type of the aOutputBufs out-parameter to nsTArray<nsTArray<uint8_t>* >*.
Attachment #759560 -
Attachment is obsolete: true
Attachment #759560 -
Flags: review?(roc)
Attachment #759561 -
Flags: review?(roc)
Comment on attachment 759559 [details] [diff] [review]
Part2: Implement OggWriter.h/.cpp
Review of attachment 759559 [details] [diff] [review]:
-----------------------------------------------------------------
::: content/media/ogg/OggWriter.cpp
@@ +98,5 @@
> + buffer->SetLength(mOggPage.header_len + mOggPage.body_len);
> + memcpy(buffer->Elements(), mOggPage.header, mOggPage.header_len);
> + memcpy(buffer->Elements() + mOggPage.header_len, mOggPage.body,
> + mOggPage.body_len);
> + aOutputBufs->AppendElement(buffer);
This is unnecessarily complicated. We don't need this level of indirection. nsTArray<nsTArray<uint8_t> > should be fine.
Assignee | ||
Comment 185•11 years ago
|
||
Attachment #759558 -
Attachment is obsolete: true
Attachment #759558 -
Flags: review?(roc)
Attachment #759628 -
Flags: review?(roc)
Assignee | ||
Comment 186•11 years ago
|
||
Attachment #759559 -
Attachment is obsolete: true
Attachment #759559 -
Flags: review?(roc)
Attachment #759630 -
Flags: review?(roc)
Assignee | ||
Comment 187•11 years ago
|
||
Comment on attachment 759630 [details] [diff] [review]
Part2: Implement OggWriter.h/.cpp
Review of attachment 759630 [details] [diff] [review]:
-----------------------------------------------------------------
::: content/media/ogg/OggWriter.cpp
@@ +99,5 @@
> + if (rc) {
> + aOutputBufs->AppendElement();
> + aOutputBufs->LastElement().SetLength(mOggPage.header_len +
> + mOggPage.body_len);
> + memcpy(aOutputBufs->LastElement().Elements(), mOggPage.header,
A much simpler implementation that also avoids a redundant buffer copy.
Assignee | ||
Comment 188•11 years ago
|
||
Comment on attachment 759630 [details] [diff] [review]
Part2: Implement OggWriter.h/.cpp
Hi Timothy, this is the patch implementing the Ogg container writer. I think it is very close to final; could you review it? Thanks a lot :)
Attachment #759630 -
Flags: review?(tterribe)
Assignee | ||
Comment 189•11 years ago
|
||
Comment on attachment 758932 [details] [diff] [review]
Part4: Add OpusTrackEncoder.h/.cpp
A near-final patch of the Opus track encoder; it addresses all issues discussed in bug 868962. Roc has reviewed the code structure part; could you review the Opus encoding part?
Attachment #758932 -
Flags: review?(tterribe)
Attachment #759628 -
Flags: review?(roc) → review+
Attachment #759630 -
Flags: review?(roc) → review+
Comment on attachment 759561 [details] [diff] [review]
Part5: Implement MediaEncoder
Review of attachment 759561 [details] [diff] [review]:
-----------------------------------------------------------------
::: content/media/encoder/MediaEncoder.h
@@ +100,5 @@
> + * allocated in ContainerWriter::GetContainerData(), and is appended to
> + * aOutputBufs. aMIMEType is the valid mime-type of this returned container
> + * data.
> + */
> + void GetEncodedData(nsTArray<nsTArray<uint8_t>* >* aOutputBufs,
Should be nsTArray<nsTArray<uint8_t> >*
Attachment #759561 -
Flags: review?(roc) → review+
Assignee | ||
Comment 191•11 years ago
|
||
This should be the most up-to-date patch, carrying r+ from Roc.
Attachment #759561 -
Attachment is obsolete: true
Attachment #760259 -
Flags: review+
Comment 192•11 years ago
|
||
Comment on attachment 759630 [details] [diff] [review]
Part2: Implement OggWriter.h/.cpp
Review of attachment 759630 [details] [diff] [review]:
-----------------------------------------------------------------
r=me
::: content/media/ogg/OggWriter.cpp
@@ +28,5 @@
> + // The serial number (serialno) should be a random number, for the current
> + // implementation where the output file contains only a single stream, this
> + // serialno is used to differentiate between files.
> + srand(static_cast<unsigned>(PR_Now()));
> + int rc = ogg_stream_init(&mOggStreamState, rand());
RAND_MAX is only 32767 on win32, which gives us quite a bit less than the full 32-bit space of available values here. But feel free to leave that for a follow-up bug.
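A possible follow-up sketch (not part of this patch): mask and combine two rand() calls so the serialno spans roughly 31 bits even where RAND_MAX is 32767.

srand(static_cast<unsigned>(PR_Now()));
int serialno = ((rand() & 0x7fff) << 16) ^ rand();  // up to ~31 bits of entropy
int rc = ogg_stream_init(&mOggStreamState, serialno);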
Attachment #759630 -
Flags: review?(tterribe) → review+
Comment 193•11 years ago
|
||
Comment on attachment 758932 [details] [diff] [review]
Part4: Add OpusTrackEncoder.h/.cpp
Review of attachment 758932 [details] [diff] [review]:
-----------------------------------------------------------------
r=me
::: content/media/encoder/OpusTrackEncoder.cpp
@@ +2,5 @@
> +/* This Source Code Form is subject to the terms of the Mozilla Public
> + * License, v. 2.0. If a copy of the MPL was not distributed with this file,
> + * You can obtain one at http://mozilla.org/MPL/2.0/. */
> +#include "OpusTrackEncoder.h"
> +#include "nsAString.h"
Don't forget to update this to nsString.h.
Attachment #758932 -
Flags: review?(tterribe) → review+
Assignee | ||
Comment 194•11 years ago
|
||
Carry r+ from :roc.
Nit: Clean up the error (a warning locally) about "comparison between signed and unsigned integer" on the try server.
Attachment #756347 -
Attachment is obsolete: true
Attachment #760812 -
Flags: review+
Assignee | ||
Comment 195•11 years ago
|
||
(In reply to Shelly Lin [:shelly] from comment #194)
> Created attachment 760812 [details] [diff] [review]
> Part0: Modify MediaSegment and AudioSegment
>
> Carry r+ from :roc.
>
> Nit: Clean up the error(warning at local) of "comparison between signed and
> unsigned integer" on try-server.
Assignee | ||
Comment 196•11 years ago
|
||
Carry r+ from roc.
Attachment #759628 -
Attachment is obsolete: true
Attachment #760815 -
Flags: review+
Assignee | ||
Comment 197•11 years ago
|
||
Carry r+ from roc.
Nit: Clean up the error (a warning locally) about "comparison between signed and unsigned integer" on the try server.
Attachment #758418 -
Attachment is obsolete: true
Attachment #760817 -
Flags: review+
Assignee | ||
Comment 198•11 years ago
|
||
Carry r+ from roc and tterribe.
Nit: Clean up the error (a warning locally) about "comparison between signed and unsigned integer" on the try server. Include the header file "nsString.h" instead of "nsAString.h".
Attachment #758932 -
Attachment is obsolete: true
Attachment #760820 -
Flags: review+
Assignee | ||
Comment 199•11 years ago
|
||
Carry r+ from roc and tterribe.
Nit: Expose some ogg symbols to |layout/media/symbols.def.in|.
Attachment #759630 -
Attachment is obsolete: true
Attachment #760822 -
Flags: review+
Assignee | ||
Comment 200•11 years ago
|
||
Fixed checkin comment.
Attachment #760812 -
Attachment is obsolete: true
Attachment #760825 -
Flags: review+
Assignee | ||
Comment 201•11 years ago
|
||
Fixed checkin comment.
Attachment #760815 -
Attachment is obsolete: true
Attachment #760826 -
Flags: review+
Assignee | ||
Comment 202•11 years ago
|
||
Fixed checkin comment.
Attachment #760822 -
Attachment is obsolete: true
Attachment #760827 -
Flags: review+
Assignee | ||
Comment 203•11 years ago
|
||
Fixed checkin comment.
Attachment #760817 -
Attachment is obsolete: true
Attachment #760829 -
Flags: review+
Assignee | ||
Comment 204•11 years ago
|
||
Fixed checkin comment.
Attachment #760820 -
Attachment is obsolete: true
Attachment #760830 -
Flags: review+
Assignee | ||
Comment 205•11 years ago
|
||
Carry r+ from roc.
Fixed checkin comment.
Attachment #760259 -
Attachment is obsolete: true
Attachment #760831 -
Flags: review+
Assignee | ||
Comment 206•11 years ago
|
||
Assignee | ||
Updated•11 years ago
|
Keywords: checkin-needed
Comment 207•11 years ago
|
||
https://hg.mozilla.org/projects/birch/rev/e59ac8e0e410
https://hg.mozilla.org/projects/birch/rev/6e7e12d2fcf4
https://hg.mozilla.org/projects/birch/rev/f0c287193841
https://hg.mozilla.org/projects/birch/rev/926801efe857
https://hg.mozilla.org/projects/birch/rev/bf6d5b00ebe2
https://hg.mozilla.org/projects/birch/rev/c422cf48657e
Tests?
Flags: in-testsuite?
Keywords: checkin-needed
Comment 208•11 years ago
|
||
https://hg.mozilla.org/mozilla-central/rev/e59ac8e0e410
https://hg.mozilla.org/mozilla-central/rev/6e7e12d2fcf4
https://hg.mozilla.org/mozilla-central/rev/f0c287193841
https://hg.mozilla.org/mozilla-central/rev/926801efe857
https://hg.mozilla.org/mozilla-central/rev/bf6d5b00ebe2
https://hg.mozilla.org/mozilla-central/rev/c422cf48657e
Status: NEW → RESOLVED
Closed: 11 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla24
Assignee | ||
Updated•11 years ago
|
Summary: Implement MediaEncoder framework encoding pipeline → Implement the framework and encoding pipeline of MediaEncoder
Updated•11 years ago
|
Whiteboard: [qa-]
Updated•11 years ago
|
Blocks: MediaRecording
Assignee | ||
Updated•11 years ago
|
Whiteboard: [qa-] → [qa-][MR1.2]
Updated•11 years ago
|
Updated•11 years ago
|
blocking-b2g: --- → koi+
Whiteboard: [MR1.2] → [MR1.2][FT: Media Recording], [Sprint: 2]
Comment 209•11 years ago
|
||
Verified smoketests are passing for media recorder.
Status: RESOLVED → VERIFIED
Keywords: verifyme
Updated•11 years ago
|
status-firefox26:
--- → fixed
Updated•11 years ago
|
Component: Video/Audio → Video/Audio: Recording
Updated•11 years ago
|
No longer blocks: MediaRecording