Bug 864522 (Open) · Opened 12 years ago · Updated 2 years ago

Improve GrallocTextureHostOGL's usage of GL textures/EGL images

Categories: Core :: Graphics: Layers (defect)
Platform: ARM · Gonk (Firefox OS)

Reporter: vlad · Assignee: Unassigned


Right now, GrallocTextureHostOGL blows away the GL texture and EGL image every time SwapTextureImpl is called, and recreates them. This is tricky to cache, since we might be rotating between 2-3 different buffers for a single GrallocTextureHost. In an ideal world, we'd tell it how many buffers we'd be rotating through (e.g. 3), and it would cache the texture/images for those. If it needs to create one for a new buffer, it just throws away the oldest. There may be other/better ways of solving this, too -- for example, giving the host the array of surfaces we want to swap between up front, and then selecting by index.
Side note: only the EGLImage really needs to be created for each SurfaceDescriptor; the GL texture object could be unique (a single texture reused across images). It's not clear to me whether we are guaranteed to have only a very small number of different SurfaceDescriptors (like the 2-3 you discuss), or whether we could have significantly more (I'm thinking of video playback or WebRTC), or even an endless sequence of every-time-different SurfaceDescriptors (if video or WebRTC doesn't just cycle through a finite list of them). Input from people who know video and WebRTC would be very useful here.
Flags: needinfo?(kchen)
Flags: needinfo?(nical.bugzilla)
On desktop the size of the media queue is 10, so when playing video we always have up to 11 descriptors in use (per playing video). On B2G the size of the media queue is 2 (so 3 surface descriptors alive per video element). I don't know about WebRTC, but I suspect it would use the same mechanism. FWIW, in bug 858914 I am rewriting a bit of the TextureClient/TextureHost logic so that each TextureClient/Host pair refers to one and only one surface descriptor, which is shared between the two sides. In order to use a different surface descriptor, one will have to create a new TextureClient/Host pair (a bit similar to what we do for content now). I think this will simplify things quite a bit in this area.
Flags: needinfo?(nical.bugzilla)
(In reply to Benoit Jacob [:bjacob] from comment #1)
> Side note: only the EGLImage really needs to be created for each
> SurfaceDescriptor. The GL texture object could be unique.
>
> It's not clear to me whether we are guaranteed to have only a very small
> number of different SurfaceDescriptors (like 2-3 as you discuss), or if we
> could have significantly more (I'm thinking about playing back video or
> WebRTC), or even an infinite sequence of every-time-different
> SurfaceDescriptors. Input from people knowing video and WebRTC would be
> very useful here.

For camera and hardware video, we only use a finite set of SurfaceDescriptors (9-12). I'm not sure what WebRTC does here; I guess they aren't using gralloc yet.
Flags: needinfo?(kchen) → needinfo?(chung)
WebRTC does not use gralloc buffers currently. To be more precise, the getUserMedia camera path uses the Camera API directly, so if we display the MediaStream from getUserMedia, the number of buffers is finite. If we get the MediaStream from a PeerConnection, we do not use gralloc buffers at all right now.
Flags: needinfo?(chung)
Severity: normal → S3