Closed Bug 858914 Opened 12 years ago Closed 11 years ago

Fix TextureClient/TextureHost's ownership model

Categories

(Core :: Graphics: Layers, defect)

Type: defect
Priority: Not set
Severity: normal

Tracking


RESOLVED FIXED
mozilla25

People

(Reporter: nical, Assigned: nical)

References

(Blocks 1 open bug)

Details

(Keywords: regression)

Attachments

(11 files, 16 obsolete files)

(deleted), image/jpeg
(deleted), patch
(deleted), patch (mattwoodrow: review+)
(deleted), patch (bas.schouten: review+)
(deleted), patch (mattwoodrow: review+)
(deleted), patch (bas.schouten: review+)
(deleted), patch
(deleted), patch (jrmuizel: review+)
(deleted), patch (bas.schouten: review+, nrc: review+)
(deleted), patch
(deleted), patch
At the end of the refactoring it turned out that the TextureClient/TextureHost ownership model of shared data is not quite right for every situation. We ended up fixing some cases with hacks in order to not postpone landing the refactoring into mozilla-central. The new model will be based on actual sharing of the data (with Lock/Unlock semantics) rather than sending data back and forth on each paint. Among other things, this will let us optimize texture uploads a bit, and make the tear-down sequence much simpler.
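The Lock/Unlock model described above can be illustrated with a minimal sketch. This is NOT the real Gecko API — all names here are hypothetical stand-ins — but it shows the shape of the idea: one client owns one shared buffer, and access is mediated by Lock/Unlock rather than by shipping data on every paint.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical sketch (not the real Gecko API): a TextureClient owns ONE
// buffer shared with its TextureHost, and access is mediated by Lock/Unlock
// instead of sending data back and forth on each paint.
class MockTextureClient {
public:
  explicit MockTextureClient(std::size_t aByteSize) : mSharedData(aByteSize) {}

  // Lock grants access to the shared memory. In the model described above,
  // the host-side upload can also happen under Lock, outside the transaction.
  std::uint8_t* Lock() {
    mLocked = true;
    return mSharedData.data();
  }

  void Unlock() { mLocked = false; }

  bool IsLocked() const { return mLocked; }
  std::size_t Size() const { return mSharedData.size(); }

private:
  bool mLocked = false;
  std::vector<std::uint8_t> mSharedData; // stands in for a Shmem/gralloc buffer
};
```

The point of the sketch is that the buffer never changes hands; only access rights do, which is what makes the tear-down sequence simpler.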
(Came here from bug 864522.) Hmm, in that other bug, you say:

> fwiw, in Bug 858914 I am rewriting a bit of the TextureClient/TextureHost
> logic so that each TextureClient/Host pair refer to one and only one surface
> descriptor which is shared between each side. In order to use a different
> surface descriptor, one will have to create a new TextureClient/Host pair (a
> bit similar to what we do for content now). I think this will simplify
> things quite a bit in this area.

The nice thing about TextureHost/TextureClient managing multiple buffers is that on the compositor side, the buffer swaps could happen asynchronously -- e.g., in the webgl case, the compositor could spin up a thread on which it waits for rendering to finish, then move to that buffer in the TextureHost, without needing an explicit transaction coming in from content (other than the one to kick things off). In other words:

1. Content tells Compositor "For TextureStream 1, I just drew into buffer B"
2. Compositor gets this message, spins up a thread to wait for rendering to finish into B
3. Compositor keeps rendering buffer A (e.g. for panning/zooming/etc.)
4. Thread notifies compositor that rendering into B is finished
5. Compositor redraws, but using B now

Video could use this same approach, but just with precise timing info. That seems like it would be harder to do if there's only one buffer managed by each TextureHost/Client, because it would require swapping the TextureHost in use by a layer. I guess a layer could manage multiple TextureClients/Hosts and do this at a higher level, but then it would be layer-specific for something that could be general.
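The five-step swap protocol in the comment above can be sketched as a small state machine (all names here are hypothetical, not Gecko's): the compositor keeps presenting buffer A until it learns that rendering into buffer B has completed.

```cpp
#include <cassert>

// Sketch of the multi-buffer swap protocol described above (hypothetical
// names). The compositor keeps presenting buffer A until the wait thread
// signals that rendering into buffer B is done.
struct MockTextureStream {
  int mPresenting = 0; // buffer currently composited ("A")
  int mPending = -1;   // freshly drawn buffer ("B"), not yet ready

  // Step 1: content tells the compositor which buffer it just drew into.
  void ContentDrew(int aBuffer) { mPending = aBuffer; }

  // Steps 2-3 happen elsewhere: a wait thread blocks until rendering
  // finishes while the compositor keeps rendering mPresenting
  // (e.g. for panning/zooming).

  // Steps 4-5: the wait thread signals completion and the swap happens.
  void RenderingFinished() {
    if (mPending != -1) {
      mPresenting = mPending;
      mPending = -1;
    }
  }
};
```

The key property is that `ContentDrew` alone never changes what is presented; only the completion notification does.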
(In reply to Vladimir Vukicevic [:vlad] [:vladv] from comment #1)

> The nice thing about TextureHost/TextureClient managing multiple buffers is
> that on the compositor side, the buffer swaps could happen asynchronously --
> e.g., in the webgl case, the compositor could spin up a thread on which it
> waits for rendering to finish, then move to that buffer in the TextureHost,
> without needing an explicit transaction coming in from content (other than
> the one to kick things off). In other words:
>
> 1. Content tells Compositor "For TextureStream 1, I just drew into buffer B"
> 2. Compositor gets this message, spins up a thread to wait for rendering to
> finish into B
> 3. Compositor keeps rendering buffer A (e.g. for panning/zooming/etc.)
> 4. Thread notifies compositor that rendering into B is finished
> 5. Compositor redraws, but using B now

This can be done by having several TextureClient/Host pairs. The advantage here is that you can manage this explicitly, because when you talk to a TextureHost you know what data it points to. With the current model you don't really know what's happening inside a TextureHost, so Compositables assume that their TextureHost is (or is not) doing things like buffering, which is prone to errors; it also makes the management of gralloc buffers and video frames hellish.

> Video could use this same approach, bust just with precise timing info.
> That seems like it would be harder to do if there's only 1 buffer managed by
> each TextureHost/Client, because it would require swapping the TextureHost
> in use by a layer. I guess a layer could manage multiple
> TextureCliens/Hosts and do this at a higher level, but then it would be
> layer-specific for something that could be general.

It would be the compositable rather than the Layer, and Compositables are already assuming the behavior of their underlying texture hosts, which depends on a bunch of things (so it's fragile).
So moving the swap logic down to the compositable is not too bad (and has the merit of being explicit). If it turns out that we implement the same swap logic in different types of compositables, we can wrap it in a separate class or in a bunch of reusable functions that manage the different TextureHosts. I am not too worried about implementing this in the new model (maybe because I haven't explained the new model clearly enough). And even if it is a bit more complicated here, I am willing to make this tradeoff because the wins in memory management are quite big (we have a lot of complication in this area right now, especially on b2g).

The video case may be a bit tedious, because the playback rate may change at any time, so presentation times would become wrong if we send the frames in advance (or maybe we can just send one frame in advance and not care if it is presented at the wrong time).

Also, with this model you don't have to upload the texture during the transaction, which can be a performance win (in synchronous transactions the main thread is waiting for the transaction to finish, so uploading there is kind of bad).
Depends on: 874726
Blocks: 875168
Attached patch New TextureClient/Host (WIP) (obsolete) (deleted) — Splinter Review
Current state of the new TextureClient/Host model. This is not finished; I primarily put it here in case something stupid happens, like my laptop breaking. Before I can get something landable I still need to sort out the different tear-down scenarios, implement the new gralloc TextureClient/Host, clean things up and add some documentation. The initial patch will probably only use the new stuff for image layers, so that the folks depending on this for the b2g video stuff don't stay blocked for too long; the rest will follow.
Attached patch New TextureClient/Host (WIP) (obsolete) (deleted) — Splinter Review
Still a work in progress. Video frames are now displayed with gralloc, but I haven't sorted out the lifetime of gralloc buffers yet. I also still need to add support for shared textures.
Attachment #754930 - Attachment is obsolete: true
Attached patch New TextureClient/Host (WIP) (obsolete) (deleted) — Splinter Review
Still WIP; gralloc and shared textures are not functional yet.
Attachment #758108 - Attachment is obsolete: true
Comment on attachment 760925 [details] [diff] [review]
New TextureClient/Host (WIP)

Although it's still a work in progress, the patch is getting big, so it would probably be good for the people likely to review it to start getting an idea of what's going on in there. Nick and Bas, you are certainly the best reviewers for this code.

So at this point the code is not shippable, but the main changes are in place:
- the current TextureClient becomes LegacyTextureClient
- the current TextureHost becomes LegacyTextureHost
- these two will disappear as soon as we move everything to the new model
- new TextureClient and TextureHost implementations are named TextureClient/TextureHost
- TextureHost and TextureSource become separate objects (rather than TextureHost being a TextureClient)
- TextureHost exposes a list of TextureClients rather than a "meta-TextureClient" that may have sub-textures
- TextureClient/Host pairs are associated with _one_ buffer that is shared, rather than having several buffers going back and forth between texture client and host
- texture uploads happen outside of the transaction (in Lock)
- TextureHostShmem is cross-platform, and has a DataTextureSource (a TextureSource that can upload a texture from a DataSourceSurface, which is backend specific)
- YCbCrTextureHostOGL doesn't exist anymore (the shmem/memory TextureClient/Host can handle YCbCr just like RGBA)
- there is an abstraction Texture{Client/Host}Buffer that is used for textures mapped in memory (Shmem or RAM), to share the code between MemoryTexture and ShmemTexture
- we don't use AutoOpenSurface with surface descriptors, but ImageDataSerializer/Deserializer, which is much simpler and uses Moz2D instead of Thebes
- we don't use GrallocBufferActor with the new texture stuff; instead the texture client/host wrap the GraphicBuffer directly
- Shared Image classes use TextureClients rather than manipulating shmem/gralloc directly; in the future we may even be able to remove some of the Image classes, since most of their differences are already abstracted out by the underlying TextureClient.

I still need to fix D3D textures, properly test gralloc, and finish implementing shared GL textures.

All of this is only used for ImageLayers right now (the rest of the layers still use the LegacyTexture classes). I plan to land it for just ImageLayer first and then convert the other layer classes as followups.

There is probably some naming to debate, and also some cleaning up to do, so please ignore printfs, trailing spaces and other style infringements for now.
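The ImageDataSerializer/Deserializer idea mentioned above can be sketched by hand: a small header (size, stride, format) followed by the raw pixels in one flat buffer, so the receiving side can reinterpret shared memory without a full surface abstraction. The layout, field set and names below are assumptions for illustration, not Gecko's actual serializer.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical flat image layout: header followed by pixel data.
struct ImageHeader {
  std::int32_t width;
  std::int32_t height;
  std::int32_t stride;  // bytes per row
  std::uint32_t format; // e.g. an RGBA/YCbCr enum value
};

std::vector<std::uint8_t> SerializeImage(const ImageHeader& aHeader,
                                         const std::uint8_t* aPixels) {
  const std::size_t dataSize =
      static_cast<std::size_t>(aHeader.stride) * aHeader.height;
  std::vector<std::uint8_t> buffer(sizeof(ImageHeader) + dataSize);
  // Copy the header, then the pixels, into one contiguous blob.
  std::memcpy(buffer.data(), &aHeader, sizeof(ImageHeader));
  std::memcpy(buffer.data() + sizeof(ImageHeader), aPixels, dataSize);
  return buffer;
}

ImageHeader DeserializeHeader(const std::vector<std::uint8_t>& aBuffer) {
  ImageHeader header;
  std::memcpy(&header, aBuffer.data(), sizeof(ImageHeader));
  return header;
}
```

In the real code the blob would live in a Shmem rather than a std::vector; the sketch only shows why this is simpler than opening a full surface on each side.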
Attachment #760925 - Flags: feedback?(ncameron)
Attachment #760925 - Flags: feedback?(bas)
There are a lot of modified files which are only there because of renaming the old textures into LegacyTexture[...]. The most interesting files to look at right now are:
- Texture{Client/Host}.{h/cpp}
- Texture{Client/Host}OGL.{h/cpp}
- CompositableTransactionParent.cpp
- ImageClient.cpp
I am trying to exercise the different gralloc code paths to see if and how much I broke things. It turns out that right now, watching a WebM video in the browser uses the shmem code path; is this intentional? More generally, Peter, can you point me to the different ways to exercise the GrallocYCbCrImage and GonkIOSurface (is the latter only for camera?) code paths?
Flags: needinfo?(pchang)
(In reply to Nicolas Silva [:nical] from comment #6)

> Comment on attachment 760925 [details] [diff] [review]
> New TextureClient/Host (WIP)
>
> although it's still a work in progress, the patch is getting big so it would
> probably be good for people likely to review it to start getting an idea of
> what's going on in there. Nick and Bas you are certainly the best reviewers
> for this code.
>
> So at this point the code is not shippable but the main changes are in place:
> - the current TextureClient becomes LegacyTextureClient
> - the current TextureHost becomes LegacyTextureHost
> - these two will desappear as soon as we move everything to the new model
> - new TextureClient and TextureHost implementations are named
> TextureClient/TextureHost

It's probably too late for this, but I find it easier to have NewTextureClient etc. and not rename the old one, rather than renaming the old versions. Then do a rename once everything is in place. The rename then acts as a sanity check on the changes.

> - TextureHost and TextureSource become separated objects (rather than
> TextureHost being a TextureClient)

I assume you mean s/TextureClient/TextureSource

> - TextureHost exposes a list of TextureClients rather than a
> "meta-TextureClient" that may have sub-textures.

and here too?

> - TextureClient/Host pair are associated with _one_ buffer that is shared,
> rather than having several buffers going back and forth between texture
> client and host.

please clarify 'buffer'

> - Texture uploads happen outside of the transaction (in Lock)

\o/

> - TextureHostShmem is cross-platform, and has a DataTextureSource (a
> TextureSource that can upload a Texture from a DataSourceSurface, which is
> backend specific)

How were TextureHostShmems not cross-platform before?
> - YCbCrtextureHostOGL doesn't exist anymore (the shmem/memory
> TextureClient/Host can handle YCbCr just like RGBA)
> - there is an abstraction Texture{Client/Host}Buffer that is used for
> Textures that are mapped in memory (Shmem or RAM) to share the code between
> MemoryTexture and ShmemTexture

I don't like the Buffer name (but I like the idea); maybe use MemoryTexture for the abstract class and LocalMemoryTexture and SharedMemoryTexture for the concrete classes?

> - we don't Use AutoOpenSurface with Surface descriptors, but
> ImageDataSerializer/Deserializer which is much simpler and uses moz2D
> instead of thebes.
> - we don't use GrallocBufferActor with the new texture stuff, instead the
> texture client/host wrap the GraphicBuffer directly.

good

> - Shared Image classes use TextureClients rather than manipulating
> shmem/Gralloc directly, in the future we may even be able to remove some of
> the Image classes since most of their differences are already abstracted out
> by the underlying TextureClient.
>
> I still need to fix D3D textures, properly test gralloc, and finish
> implementing shared gl textures.
>
> All of this is only used for ImageLayers right now (the rest of the layers
> still use the Legacytexture classes). I plan to land it for just ImageLayer
> first and then convert other layer classes as followups.
>
> There is probably some naming to debate, and also some cleaning up to do so
> please ignore printfs and trailing spaces and other style infringements for
> now.

Sounds awesome, I'll take a look through the code in a little while.
(In reply to Nick Cameron [:nrc] from comment #9)

> > - new TextureClient and TextureHost implementations are named
> > TextureClient/TextureHost
>
> It's probably too late for this, but I find it easier to have
> NewTextureClient etc. and not rename the old one, rather than renaming the
> old versions. Then do a rename once everything is in place. The rename then
> acts as a sanity check on the changes.

I originally had NewTextureClient & TextureClient, and the others asked that I move to TextureClient & LegacyTextureClient so as to have less pollution of the history.

> > - TextureHost and TextureSource become separated objects (rather than
> > TextureHost being a TextureClient)
>
> I assume you mean s/TextureClient/TextureSource

yes, sorry

> > - TextureHost exposes a list of TextureClients rather than a
> > "meta-TextureClient" that may have sub-textures.
>
> and here too?

yes

> > - TextureClient/Host pair are associated with _one_ buffer that is shared,
> > rather than having several buffers going back and forth between texture
> > client and host.
>
> please clarify 'buffer'

Buffer, as in the chunk of memory that backs the texture, from an IPC point of view: Shmem, pointer to RAM, gralloc buffer, shared texture, etc.

> > - TextureHostShmem is cross-platform, and has a DataTextureSource (a
> > TextureSource that can upload a Texture from a DataSourceSurface, which is
> > backend specific)
>
> How were TextureHostShmems not cross-platform before?

It was TextureImageTextureHostOGL and TextureHostShmemD3D11; there was no backend-independent TextureHostShmem.

> I don't like the buffer name (but I like the idea), maybe use MemoryTexture
> for the abstract class and LocalMemoryTexture and SharedMemoryTexture for
> the concrete classes?

Good point.
This time, I'll write down all the names that are subject to debate and change them after I am done with the patch's logic, when I am sure we have reached a consensus (actually, I should have done that for NewTextureHost vs LegacyTextureHost).
Attached patch New TextureClient/Host (WIP) (obsolete) (deleted) — Splinter Review
I won't add the feedback request each time I update the patch, to avoid the email churn.
Attachment #760925 - Attachment is obsolete: true
Attachment #760925 - Flags: feedback?(ncameron)
Attachment #760925 - Flags: feedback?(bas)
Comment on attachment 762155 [details] [diff] [review]
New TextureClient/Host (WIP)

Review of attachment 762155 [details] [diff] [review]:
-----------------------------------------------------------------

Looking good so far; a few comments below, and there are a load of nits which I've ignored for now.

I really think this should have been done as several smaller refactorings - that was an important lesson from the layers refactoring, I think. It seems to me that there are a lot of independent things going on here, and it would be easier to review and easier for others to get to know the code if they happened gradually and separately. Also, much less risk. Still, I think all these changes are for the better, so I am glad they are getting done :-)

::: gfx/layers/client/ImageClient.cpp
@@ +108,5 @@
> + if (image->AsTextureClient()) {
> + // fast path: no need to allocate and/or copy image data
> + printf_stderr("UpadteImage: preallocated TextureClient\n");
> + RefPtr<TextureClient> tex = image->AsTextureClient();
> + MOZ_ASSERT(tex);

totally unnecessary :-)

@@ +132,5 @@
> + printf_stderr("no client front buffer, creating it...\n");
> + mFrontBuffer = CreateTextureClient(TEXTURE_SHMEM, gfx::FORMAT_YUV);
> + gfx::IntSize ySize(data->mYSize.width, data->mYSize.height);
> + gfx::IntSize cbCrSize(data->mCbCrSize.width, data->mCbCrSize.height);
> + if (!mFrontBuffer->AsTextureClientYCbCr()->AllocateForYCbCr(ySize, cbCrSize)) {

Can we move texture client creation to Image so we avoid having all this image knowledge in the image client? Or is that insane? (Then the fast path can be transparent to the image client.)

@@ +234,3 @@
> return false;
> }
> + bool status = mFrontBuffer->AsTextureClientSurface()->UpdateSurface(pattern, aContentFlags);

passing a pattern to the texture client feels wrong, but I can't really say why. I guess I don't think it is either the ImageClient or the TextureClient's business to know about drawing?
Can we call back to the image to do the drawing somehow? Could it keep mFilter and surface? I guess together with taking care of texture client creation this would be making Image a bit more heavy weight, but maybe that is OK (to be clear, they shouldn't really know about texture client creation and implementation details, but I think it is ok for images to know about drawing and to 'cache' a texture client).

@@ +263,5 @@
> void
> +ImageClientSingle::Detach()
> +{
> + mFrontBuffer = nullptr;
> + // TODO[nical] what could happen is that there are pending textures for destruction at this moment, would leak

We have patterns for this with ContentClient/Hosts - I wonder if we can pull out the double buffering code into something a bit more abstract to share code better? #vagueideasthatprobablywontwork

::: gfx/layers/client/TextureClient.h
@@ +203,5 @@
> + {
> + return mID;
> + }
> +
> + void SetNextSibling(TextureClient* aNext)

Why do we track texture client siblings?

::: gfx/layers/composite/TextureHost.h
@@ +488,5 @@
> +
> + uint64_t GetID() const { return mID; }
> +
> + /**
> + * LegacyTextureHosts are kept as a linked list in their compositable

Why? Should it not be up to the compositable how it stores its texture hosts?

::: gfx/layers/ipc/CompositableTransactionParent.cpp
@@ +29,2 @@
> bool
> CompositableParentManager::ReceiveCompositableUpdate(const CompositableOperation& aEdit,

btw, when did this change name and why? I think I prefer the old name, but can we at least keep the class name and file name in sync please?

@@ +217,5 @@
> +
> + RefPtr<TextureHost> texture = compositable->GetLegacyTextureHost(op.textureID());
> + MOZ_ASSERT(texture);
> +
> + TextureRemovalFlags flags = op.removeFlags() & TEXTURE_REMOVE_DEFAULT

When does the client want to override the removal flags on the texture? This gives me the fear a little bit.
Attachment #762155 - Flags: feedback+
(In reply to Nick Cameron [:nrc] from comment #12)

> Comment on attachment 762155 [details] [diff] [review]
> New TextureClient/Host (WIP)
>
> Review of attachment 762155 [details] [diff] [review]:
> -----------------------------------------------------------------
>
> Looking good so far, a few comments below and there are a load of nits which
> I've ignored for now.
>
> I really think this should have been done as several smaller refactorings -
> that was an important lesson from the layers refactoring, I think. It seems
> to me that there are a lot of independent things going on here and it would
> be easier to review and easier for others to get to know the code if they
> happened gradually and separately. Also, much less risk. Still I think all
> these changes are for the better, so I am glad they are getting done :-)

Yes. The thing here is that I'd either change the semantics of the current TextureClient/Host and do it all at once for all layer types, or have two implementations and let the new one gradually replace the old. I think doing the two implementations side by side is the best way in this case, but as soon as you start rewriting something from scratch it is really tempting to implement a lot of its different features in a different and better way than what was there (for each feature, implementing it the old or the new way is more or less the same cost for me). So for most of these things I kinda had to implement them along with the Texture classes. There are a few, like "gralloc without the actor", that I can take out of the patch, but then they will be used only when the texture patch is applied.

So, I admit I mostly took the path of making it easier for me to write rather than easier to review. It is not too late to split it into different patches for review (actually I originally was doing that, but rebasing was horrible so I ended up making it one patch), but all these patches will have to be applied at the same time.
> Can we move texture client create to Image so we avoid having all this image
> knowledge in the image client? Or is that insane? (Then the fast path can be
> transparent to the image client)
>
> @@ +234,3 @@
> > return false;
> > }
> > + bool status = mFrontBuffer->AsTextureClientSurface()->UpdateSurface(pattern, aContentFlags);
>
> passing a pattern to the texture client feels wrong, but I can't really say
> why. I guess I don't think it is either the ImageClient or the
> TextureClient's business to know about drawing? Can we call back to the
> image to do the drawing somehow? Could it keep mFilter and surface? I guess
> together with taking care of texture client creation this would be making
> Image a bit more heavy weight, but maybe that is OK (to be clear, they
> shouldn't really know about texture client creation and implementation
> details, but I think it is ok for images to know about drawing and to
> 'cache' a texture client).

I am not fond of TextureClient taking a pattern either, but this way required rewriting the least code for this particular thing. I was planning to think more about it later and fix it in another patch, either in the process of Moz2Dification, or when unifying some of the Image classes that are becoming redundant.

My current thinking is that the notion of Image has mostly become redundant now that we have TextureClient. Image has a subset of the role of TextureClient, but only in the case of ImageLayer. So if we can have ImageClient take TextureClient as input, we reduce the difference between the interface of ImageClient and those of the other compositable classes, which is a nice side effect. But fixing this now would contribute to the feature creep that I have already driven this patch into, as you pointed out earlier. Note that with this patch, ImageClient already has much less knowledge of Image types (since all the shared types of Image are now handled as texture clients).
> @@ +263,5 @@
> > void
> > +ImageClientSingle::Detach()
> > +{
> > + mFrontBuffer = nullptr;
> > + // TODO[nical] what could happen is that there are pending textures for destruction at this moment, would leak
>
> We have patterns for this with ContentClient/Hosts - I wonder if we can pull
> out the double buffering code into something a bit more abstract to share
> code better? #vagueideasthatprobablywontwork

The problem here is: what happens if the TextureClient outlives the compositable (cf. the video pipeline)? When the compositable is gone we don't have a way to talk to the texture host anymore. This is why there is this default removal flag that can be overridden, so that we can do something specific to the texture host if the compositable dies before the texture pair. I don't like it at all, but it's hard to cover all the tricky cases of the way we share buffers (gralloc in particular).

One way to fix it would be to resurrect PTexture, so that texture clients and hosts could be independent of their compositables. This is probably also the sanest way to implement a few of the things we will want in the future, like sharing a texture between several layers for texture atlases, or when a video stream is displayed in several elements at the same time, but again I have postponed it to avoid doing a second layers refactoring. Granted, though, this removal flag that can be overridden is not a satisfying solution.

> ::: gfx/layers/composite/TextureHost.h
> @@ +488,5 @@
> > +
> > + uint64_t GetID() const { return mID; }
> > +
> > + /**
> > + * LegacyTextureHosts are kept as a linked list in their compositable
>
> Why? Should it not be up to the compositable how it stores its texture hosts?

I made TextureClient and TextureHost linked lists to move managing them into the base Compositable class as much as possible, rather than redoing the same thing in all Compositable implementations (and to make sure that Add/RemoveTexture has the same semantics everywhere).
> ::: gfx/layers/ipc/CompositableTransactionParent.cpp
> @@ +29,2 @@
> > bool
> > CompositableParentManager::ReceiveCompositableUpdate(const CompositableOperation& aEdit,
>
> btw, when did this change name and why? I think I prefer the old name, but
> can we at least keep the class name and file name in sync please?

I think this has always had this name. I meant to go through all the similar naming problems and fix them by the end of the refactoring, and then we got very busy, found out that getting people to agree on names is tough (almost a week of debate for the LayerHost thing, which ended as "let's keep it unchanged for now"), and forgot about this one. I have other naming problems in mind that we'll have to fix eventually.
Blocks: 889208
This part has changed a little bit (TextureHosts used to inherit from TextureSource), so here is a little hand-drawn diagram. I'll make a cleaner one for the wiki later.

Also, TextureSource is now a list of TextureSources (it has a GetNextSibling() member). This is a cleaner way to express that a TextureHost can give access to several TextureSources (like with YCbCr images).

Ideally, all backends should implement a DataTextureSource. That way they support ShmemTextureHost and MemoryTextureHost for all image formats (including YCbCr) for free.
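The "TextureSource is a list" idea above can be sketched as an intrusive singly-linked list, which lets one TextureHost expose several sources, e.g. the Y, Cb and Cr planes of a YCbCr image. The names below are illustrative mocks, not the real classes.

```cpp
#include <cassert>

// Hypothetical sketch: each source knows its next sibling, so a host can
// hand out a chain of sources (e.g. Y, Cb, Cr planes) via one pointer.
class MockTextureSource {
public:
  MockTextureSource* GetNextSibling() const { return mNextSibling; }
  void SetNextSibling(MockTextureSource* aSibling) { mNextSibling = aSibling; }

private:
  MockTextureSource* mNextSibling = nullptr;
};

// Walk the sibling chain, counting the sources it exposes.
int CountSources(MockTextureSource* aFirst) {
  int count = 0;
  for (MockTextureSource* s = aFirst; s; s = s->GetNextSibling()) {
    ++count;
  }
  return count;
}
```

An RGBA host would expose a chain of length one; a YCbCr host a chain of length three, without the compositable needing a special "meta" type.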
Attached patch New TextureClient/Host (WIP) (obsolete) (deleted) — Splinter Review
Still in progress; this version is broken somewhere around allocating the texture client's shmem, but I have all or most of the logic written now ("just" need to clear some TODOs, fix breakage and clean up). Feedback still welcome.
Attachment #762155 - Attachment is obsolete: true
Attached patch New TextureClient/Host (WIP) (obsolete) (deleted) — Splinter Review
oops, uploaded the wrong patch
Attachment #771405 - Attachment is obsolete: true
This is an enormous patch :) Can we Skype (along with whoever else is interested) on Monday and get a quick overview of what the new model is?
The basic compositor backend that just landed does not use an interface for its TextureSourceBasic, which makes it complicated for me to port to the new architecture. This patch adds an interface for TextureSourceBasic (and the previously named TextureSourceBasic class becomes TextureSourceHostBasic, since it is also a TextureHost and needs a new name).
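The interface extraction described above can be sketched like this (hypothetical, heavily simplified types; the concrete class is the one the patch renames TextureSourceHostBasic): callers program against an abstract interface rather than the concrete software-compositing class.

```cpp
#include <cassert>

// Stand-in for the software surface the basic backend samples from.
struct MockSurface {
  int width;
  int height;
};

// The extracted interface: what the rest of the code is allowed to see.
class TextureSourceBasic {
public:
  virtual ~TextureSourceBasic() {}
  virtual MockSurface* GetSurface() = 0;
};

// The concrete class (renamed in the patch because it also plays the
// TextureHost role) implements the interface.
class TextureSourceHostBasic : public TextureSourceBasic {
public:
  MockSurface* GetSurface() override { return &mSurface; }

private:
  MockSurface mSurface{640, 480};
};
```

With the interface in place, the new texture model can provide a different implementation of TextureSourceBasic without touching its callers.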
Attachment #772032 - Flags: review?(matt.woodrow)
Comment on attachment 772032 [details] [diff] [review]
Make basic textures use the same design pattern as other backends

Review of attachment 772032 [details] [diff] [review]:
-----------------------------------------------------------------

::: gfx/layers/basic/BasicCompositor.cpp
@@ +29,5 @@
> +
> +/**
> + * Texture source and host implementaion for software compositing.
> + */
> +class TextureSourceHostBasic : public TextureHost

Let's just go with TextureHostBasic. Much less of a mouthful.
Attachment #772032 - Flags: review?(matt.woodrow) → review+
Renames TextureClient into DeprecatedTextureClient and TextureHost into DeprecatedTextureHost, and applies that logic to derived classes. Not the nicest patch to review, I fear, but it is mostly a sed script run on the tree, so there is no fiddly logic in here, and it's stuff that won't get in the way of reviewing the coming tricky patch. Note that all of the code touched here will be replaced by the new texture stuff soon, so it doesn't matter if some names look awkward with "Deprecated" in the middle.
Attachment #772323 - Flags: review?(matt.woodrow)
(In reply to Nicolas Silva [:nical] from comment #22)

> Created attachment 772323 [details] [diff] [review]
> Part 2 - rename texture classes into DeprecatedTextureClient/Host
>
> Renames TextureClient into DeprecatedTextureClient, TextureHost into
> DeprecatedTextureHost, and applies that logic to derived classes.

You did mean "Legacy", rather than "Deprecated", right?
oops, wrong patch
Attachment #772323 - Attachment is obsolete: true
Attachment #772323 - Flags: review?(matt.woodrow)
Attachment #772331 - Flags: review?(matt.woodrow)
(In reply to Milan Sreckovic [:milan] from comment #23)

> You did mean "Legacy", rather than "Deprecated", right?

I meant Deprecated but uploaded an older version of the patch. This one uses the word Deprecated, which fits better I think.
Attachment #772331 - Flags: review?(matt.woodrow) → review+
Status: NEW → RESOLVED
Closed: 11 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla25
(In reply to Tom Schuster [:evilpie] from comment #26)

> This looks wrong to me:
> http://hg.mozilla.org/integration/mozilla-inbound/rev/1c95ddb6833a#l1.27

If you mean that the indentation is a bit off, I fixed it in the patch that I just landed, but it's OK since this code will change in the very short term. If you are talking about having an interface for TextureSourceBasic, it is necessary because the texture model that is about to replace this one separates TextureHost and TextureSource into distinct objects.
There is more to come; sorry, I forgot to add the [leave open] flag.
Status: RESOLVED → REOPENED
Resolution: FIXED → ---
Whiteboard: [leave open]
This patch just makes CreateTexturedEffect use TextureSources rather than TextureHosts. This makes more sense, since TextureHost should not take part in the actual compositing (which is TextureSource's job, while TextureHost abstracts out sharing between threads/processes). More importantly, it is needed for the new textures. It is an isolated change that is backward compatible with the deprecated texture stuff, since DeprecatedTextureHost is a TextureSource, so I can take it out of the big patch and get it reviewed separately.
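The direction of this change can be sketched as follows (hypothetical, heavily simplified names and types, not the real factory): the effect is created from the object actually sampled during compositing, and the format passed in may differ from the source's own format, as in the YCbCr case where the effect is YCbCr but each plane source is single-channel.

```cpp
#include <cassert>
#include <cstdint>

// Stand-in for a TextureSource: the thing the compositor samples from.
class MockTextureSource {
public:
  explicit MockTextureSource(std::uint32_t aFormat) : mFormat(aFormat) {}
  std::uint32_t GetFormat() const { return mFormat; }

private:
  std::uint32_t mFormat;
};

// Minimal textured-effect record: which source to sample, in what format.
struct TexturedEffect {
  MockTextureSource* mSource;
  std::uint32_t mFormat;
};

// The factory consumes a TextureSource, not a TextureHost. aFormat may
// legitimately differ from aSource->GetFormat(), e.g. a YCbCr effect whose
// individual plane sources are single-channel.
TexturedEffect CreateTexturedEffect(std::uint32_t aFormat,
                                    MockTextureSource* aSource) {
  return TexturedEffect{aSource, aFormat};
}
```

Decoupling the effect from the host is what lets the same factory work for both the deprecated hosts (which are TextureSources) and the new model's standalone sources.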
Attached patch Final part - Texture refactoring (WIP) (obsolete) (deleted) — Splinter Review
Current state: the software backend hasn't been ported properly yet, there are a few things to tidy up, and since the last rebase I have new crashes to fix. There are probably going to be a few other small parts that I can take out of the patch to ease review, but right now I'd rather focus on getting it complete. Feedback always welcome, especially from future reviewers who can enjoy it as bedside reading, maybe.
Attachment #771475 - Attachment is obsolete: true
Attached patch Final part - Texture refactoring (WIP) (obsolete) (deleted) — Splinter Review
Getting closer and closer. I still have stuff to do in the software backend, a warning on OS X to fix, and some testing to do on B2G and Windows.
Attachment #772649 - Attachment is obsolete: true
Depends on: 892505
Comment on attachment 772642 [details] [diff] [review] Part 3 - CreateTexturedEffect changes Review of attachment 772642 [details] [diff] [review]: ----------------------------------------------------------------- ::: gfx/layers/Effects.h @@ +229,5 @@ > + * aSource as the (first) texture source. > + * > + * Note that aFormat can be different form aSource->GetFormat if, we are > + * creating an effect that takes several texture sources (like with YCBCR > + * where aFormat would be FOMRAT_YCBCR and each texture source would be Nit: FORMAT_YUV. It would be nice if we only used one of these names everywhere
Attached patch Final part - Texture refactoring (WIP) (obsolete) (deleted) — Splinter Review
Basic backend working except when playing video: the area of the video frame is not invalidated.
Attachment #773709 - Attachment is obsolete: true
(In reply to Nicolas Silva [:nical] from comment #27) > https://hg.mozilla.org/integration/mozilla-inbound/rev/26f83c4cb81e Could we back this out until the new stuff lands too? I don't see any advantage to landing this until then and it is a pretty anti-social patch - it breaks any other patch which touches texture hosts or clients.
Sorry about that. It's about whose life we want to make harder. The rest will land soon, so the DeprecatedTextureFooBar classes will stick around anyway. Also, it is easier to run a quick sed command on the patch queue than it is for me to look at logical changes in the code (the patch queue rebase workflow is not great for that). sed -i 's/TextureHost/DeprecatedTextureHost/g' .hg/patches/* sed -i 's/TextureClient/DeprecatedTextureClient/g' .hg/patches/* Should work for 99% of the cases (I just fixed up a few doc comments to remain TextureHost and TextureClient for clarity)
(In reply to Nicolas Silva [:nical] from comment #38) > Sorry, about that. It's about who's life we want to make harder. The rest > will land soon so the DeprecatedTextureFooBar classes will stick around > anyway. also it is easier to run a quick sed command on the patch queue than > it is for me to to look at logical changes in the code (the patch queue > rebase work flow is not great for that). > > sed -i 's/TextureHost/DeprecatedTextureHost/g' .hg/patches/* > sed -i 's/TextureClient/DeprecatedTextureClient/g' .hg/patches/* > > Should work for 99% of the cases (i just fixed up a few doc comments to > remain TextureHost and TextureClient for clarity) Fair enough. I don't mind so much for the d3d9 stuff - I will have to do a serious rebase at some point anyway, I guess. But it could affect a lot of people - I just reviewed a b2g patch which this breaks, which is why I came across the problem - I had a frustrating few minutes where I thought MXR was broken :-) What is the plan for the Deprecated* classes? How long will they stay in after the new things land?
The plan is to convert Compositable classes to the new textures one by one as followups (right now I only have ImageLayers). It should be rather straightforward for most (or all) of them since the new textures are much easier to use than the deprecated ones, although I need to talk with the people who implemented some of the compositable classes to get a precise idea of how long it will take. I ported all the texture classes of the existing backends (except the WebGL stream stuff, where some of the logic probably needs to move into a SurfaceStreamCompositable, but at the moment it is doing its own thing so it is less urgent than the ImageLayers and ThebesLayers stuff). Depending on how many people work on converting compositable classes (the work can be done in parallel) it may be done quickly. I really want to see the Deprecated classes disappear ASAP. Once the compositable classes are converted, the deprecated classes will just be removed (since they won't be used at all). This will give us a sane basis on top of which to implement some stuff that we want for better video frame synchronization and ways to share textures between layers.
Blocks: NewTextures
(In reply to Nicolas Silva [:nical] from comment #8) > I am trying to exercise the different gralloc code path to see if and how > much I broke things. It turns out that right now, watching a WebM video in > the browser uses the shmem code path, is this intentional? More generally, > Peter, can you point me to the different ways to exercise the > GrallocYCbCrImage and GonkIOSurface (is the later only for camera?) code > paths? Sorry for the late reply. I just checked the code flow. GrallocYCbCrImage is used in the Ogg reader (software decode) to save the output video data, for example when playing an *.ogv file. GonkIOSurface is used in the OMX reader (hardware decode) and for camera preview. Therefore, you can use camera preview and play an Ogg format video for verification. Btw, after checking the implementation of both classes, I think we could merge them into a single class. I created bug 894262 for it.
Flags: needinfo?(pchang)
(In reply to peter chang[:pchang] from comment #41) > Sorry for late reply. I just checked the code flow. > > GrallocYCbCrImage was used in Ogg reader(software decode) to save output > video data,for example play *.ogv file. > GonkIOSurface was used in OMX reader(hardware decode) or Camera preview. > > Therefore, you can use camera preview and play ogg format video for > verification. Ok cool thanks! > > Btw, after checking the implementation of both classes, I think we could > merge them to single class. And I created bug 894262 for it. I thought so too but wasn't sure. I am pulling out the gralloc changes into a separate patch to ease up review and so that we can review and land the new textures asap (preffed off at least on B2G) while I haven't done all the proper testing on B2G, and land a clean gralloc implementation separately.
Attached patch Part 4 - Texture refactoring + OGL backend (obsolete) (deleted) — Splinter Review
Here is the latest version of the textures refactoring, hidden behind the pref "layers.use-deprecated-textures" (new textures are off by default). The D3D11 and Basic backends and the Gralloc stuff are in separate patches.
Attachment #774053 - Attachment is obsolete: true
Attachment #776494 - Flags: feedback?(ncameron)
Attachment #776494 - Flags: feedback?(matt.woodrow)
Attachment #776494 - Flags: feedback?(bgirard)
Attachment #776494 - Flags: feedback?(bas)
Attached patch Part 5 - Basic backend (deleted) — Splinter Review
Attached patch Part 6 - D3D11 backend (deleted) — Splinter Review
Attachment #776497 - Flags: review?(bas)
Attached patch Part 7 - Gralloc changes (WIP) (obsolete) (deleted) — Splinter Review
I am not done with this one, it needs some fixing up and testing.
As soon as we decide that we can use the new textures for image layers and drop the deprecated textures there, we can remove all of this code. This will make me extremely happy. So here is a patch to show how motivating it is to move to the new texture architecture (and this is just for ImageLayer; there is more to remove when we support the new textures for other layer types).
In order for the old and new texture stuff to coexist, I needed to rename the old ImageClient classes to DeprecatedImageClient, and do the same for SharedPlanarYCbCrImage and SharedRGBImage.
Comment on attachment 776494 [details] [diff] [review] Part 4 - Texture refactoring + OGL backend Review of attachment 776494 [details] [diff] [review]: ----------------------------------------------------------------- ::: gfx/layers/client/ImageClient.h @@ +63,5 @@ > > /** > + * An image client which uses a single texture client. > + */ > +class Image_Client_Single : public ImageClient oops, I forgot to fix this name up. I corrected it locally.
Comment on attachment 776494 [details] [diff] [review] Part 4 - Texture refactoring + OGL backend Review of attachment 776494 [details] [diff] [review]: ----------------------------------------------------------------- There was some random stuff we found while looking through. We have some other questions that we can talk about in the meeting tomorrow: Why a linked list of TextureSources? Why treat YCbCr as three buffers? GetSubSource() seems like an array implemented as a linked list. Is doing the YCbCr stuff together the right thing? How do we avoid racing during update? Why do we store the buffer info in the shared memory? Talk about Disconnect() vs. Detach(): can we disambiguate their names more? TextureIDs? ::: gfx/gl/GLContext.cpp @@ +19,5 @@ > #include "prenv.h" > #include "prlink.h" > #include "SurfaceStream.h" > +#include "mozilla/gfx/2D.h" > +#include "gfx2DGlue.h" What is this needed for? ::: gfx/layers/CompositorTypes.h @@ +203,5 @@ > * > * See ShadowLayerForwarder::OpenDescriptor for example. > */ > enum OpenMode { > + OPEN_CLOSED = 0x0, What is this for? ::: gfx/layers/client/ClientCanvasLayer.h @@ +36,5 @@ > MOZ_COUNT_DTOR(ClientCanvasLayer); > + if (mCanvasClient) { > + mCanvasClient->Detach(); > + mCanvasClient = nullptr; > + } Why's this here? ::: gfx/layers/client/ImageClient.cpp @@ +88,5 @@ > + } > + > + if (image->AsTextureClient()) { > + // fast path: no need to allocate and/or copy image data > + printf_stderr("UpadteImage: preallocated TextureClient\n"); These printfs probably need to go. ::: gfx/layers/client/TextureClient.cpp @@ +100,5 @@ > +{ > + printf_stderr(" -- MemoryTextureClient(%p)::Allocate %i\n", this, (int)aSize); > + MOZ_ASSERT(!mBuffer); > + mBuffer = new uint8_t[aSize]; > + memset(mBuffer, 0, aSize); Let's not do this. The callers can do this as needed. 
Except A8 on OS X (see the current in-tree allocator) @@ +121,5 @@ > + return false; > + } > + > + nsRefPtr<gfxContext> tmpCtx = new gfxContext(surf.get()); > + tmpCtx->SetOperator(gfxContext::OPERATOR_SOURCE); Can we use a CopySurface() helper here? There may be one already; otherwise create one. @@ +138,5 @@ > + if (!Allocate(bufSize)) { > + return false; > + } > + ImageDataSerializer serializer(GetBuffer()); > + serializer.InitializeBufferInfo(aSize, mFormat); Rather than constructing two ImageDataSerializers (one here, and one in the previous function) for different purposes, how about storing aSize as mSize, and doing both operations in the previous function? Same thing for the YCbCr version. ::: gfx/layers/client/TextureClient.h @@ +144,5 @@ > + TextureFlags mFlags; > +}; > + > +/** > + * DeprecatedTextureClient that wraps a random access buffer such as a Shmem or raw memory. Is DeprecatedTextureClient correct? @@ +193,5 @@ > + gfx::SurfaceFormat mFormat; > +}; > + > +/** > + * DeprecatedTextureClient that wraps shared memory. DeprecatedTextureClient ::: gfx/layers/composite/ImageHost.cpp @@ +40,5 @@ > + const nsIntRegion* aVisibleRegion, > + TiledLayerProperties* aLayerProperties) > +{ > + if (!GetCompositor()) { > + // should only happen during tabswitch if async-video is still sending frames. I don't understand why the compositor goes away during tabswitch. ::: gfx/layers/composite/TextureHost.cpp @@ +94,5 @@ > + case LAYERS_BASIC: > + return CreateBackendIndependentTextureHost(aID, > + aDesc, > + aDeallocator, > + aFlags); Is this used for more backends than LAYERS_BASIC? ::: gfx/layers/composite/TextureHost.h @@ +206,5 @@ > + */ > + uint32_t GetUpdateSerial() const { return mUpdateSerial; } > + void SetUpdateSerial(uint32_t aValue) { mUpdateSerial = aValue; } > + > + // By default at least set the update serial tp zero. 
typo 'tp' @@ +238,5 @@ > + * > + * A TextureHost implementation corresponds to one SurfaceDescriptor type, as > + * opposed to TextureSource that corresponds to device textures. > + * This means that for YCbCr planes, even though they are represented as > + * 3 textures internally (3 TextureSources), we have use 1 TextureHost and not 3, "we have use 1" doesn't make sense. @@ +244,5 @@ > + * they are uploaded separately. > + * > + * There is always one and only one TextureHost per TextureClient, and the > + * TextureClient/Host pair only owns one buffer of image data through its > + * lifetime. This means that to the lifetime of the underlying shared data s/This means that to the lifetime/This means that the lifetime/ @@ +285,5 @@ > + > + /** > + * Note that the texture host format can be different from its corresponding > + * texture source's. For example a ShmemTextureHost can have the ycbcr > + * format and produce 3 "aplha" textures sources. "alpha" @@ +296,5 @@ > + * This can trigger texture uploads, so do not call it inside transactions > + * so as to not upload textures while the main thread is blocked. > + * Must not be called while this TextureHost is not sucessfully Locked. > + */ > + virtual NewTextureSource* GetTextureSource() = 0; Matt thinks this should be called GetFirstTextureSource(). Unless you read the comment it's not clear that this returns a list. @@ +299,5 @@ > + */ > + virtual NewTextureSource* GetTextureSource() = 0; > + > + /** > + * Is before compositing if the shared data has changed since last Is called ::: gfx/layers/ipc/LayerTransaction.ipdlh @@ +326,5 @@ > + > +struct OpUpdateTexture { > + PCompositable compositable; > + uint64_t textureID; > + MaybeRegion region; Can this just be a region that's the size of the texture? Does not having it nullable make things simpler? 
::: modules/libpref/src/init/all.js @@ +4066,5 @@ > pref("layers.offmainthreadcomposition.testing.enabled", false); > // Whether to animate simple opacity and transforms on the compositor > pref("layers.offmainthreadcomposition.async-animations", false); > +// Whether to prefer normal memory over shared memory. Ignored with cross-process compositing > +pref("layers.disable-shmem", false); Why isn't this prefer-memory-over-shmem? And it should be set to true to match the current behaviour.
Attachment #776494 - Flags: feedback?(matt.woodrow)
Attachment #776494 - Flags: feedback+
Comment on attachment 776494 [details] [diff] [review] Part 4 - Texture refactoring + OGL backend Review of attachment 776494 [details] [diff] [review]: ----------------------------------------------------------------- Damn, that is a lot of stuff - nical++ We need to talk about landing this and the d3d9 compositor work. That is mostly ready to go, but I think interacts heavily with this stuff. Please have a look at the patches on bug 874721 and let me know how badly you think we are going to stomp each other. ::: gfx/layers/Compositor.cpp @@ +4,5 @@ > * file, You can obtain one at http://mozilla.org/MPL/2.0/. */ > > #include "mozilla/layers/Compositor.h" > #include "mozilla/layers/Effects.h" > +#include "mozilla/layers/TextureHost.h" not needed? ::: gfx/layers/Compositor.h @@ +189,5 @@ > + * Return true if the effect type is supported. > + * > + * By default Compositor implementations should support all effects but in > + * some rare cases it is not possible to support an effect efficiently. > + * this is the case for BasicCompositor with EffetcYcBcR. typo ::: gfx/layers/CompositorTypes.h @@ +42,5 @@ > +// > +// For example, if the GraphicBuffer has an Android pixel format of > +// PIXEL_FORMAT_RGBA_8888 and isRBSwapped is true, when it is sampled > +// (for example, with GL), a BGRA shader should be used. > +const TextureFlags TEXTURE_RB_SWAPED = 1 << 6; Pretty sure one of the b2g guys (pchang, I think) recently added something similar. @@ +49,5 @@ > +const TextureFlags TEXTURE_FRONT = 1 << 10; > +// A texture host on white for component alpha > +const TextureFlags TEXTURE_ON_WHITE = 1 << 11; > + // A texture host on black for component alpha > +const TextureFlags TEXTURE_ON_BLACK = 1 << 12; I have a bad feeling about these flags - these things seem like properties of how the texture is used rather than of the texture itself, so they should be defined somewhere else, but I haven't got to how they are used yet, so this could be way off. 
@@ +59,5 @@ > +// Who is responsible for deallocating the shared data. > +// if none of the following two flags is set, the shared data will not be > +// deallocated by the layers system. It is not necessarily a leak, it could > +// be a choice from another part of gecko that wants to keep the data alive > +// flor some reason. The default behaviour is to deallocate on the host side. The first part of the comment makes it sound like the default behaviour is not to deallocate. The last line contradicts this. Also a typo 'flor' ::: gfx/layers/ImageContainer.h @@ +70,5 @@ > > public: > virtual ~Image() {} > > + virtual TextureClient* AsTextureClient() { return nullptr; } I don't think I like Images knowing about TextureClients, seems like we're leaking across abstraction layers. ::: gfx/layers/ImageDataSerializer.cpp @@ +23,5 @@ > +// +-------------------+ ------+ > +// | | > +// | data | > +// | | > +// +-------------------+ Why don't we use a struct that embodies this structure? ::: gfx/layers/ImageDataSerializer.h @@ +22,5 @@ > + > +namespace mozilla { > +namespace layers { > + > +class ImageDataSerializerBase What is the motivation for having these in separate classes rather than just as methods in Image or somewhere? Should probably explain that in a comment here. ::: gfx/layers/basic/BasicCompositor.h @@ +37,5 @@ > > virtual void Destroy() MOZ_OVERRIDE; > > + virtual TemporaryRef<DataTextureSource> > + CreateDataTextureSource(TextureFlags aFlags = 0) MOZ_OVERRIDE { return nullptr; } Is this going to change in the future? Should have a TODO comment maybe. If we are going to leave it like this, it would be better to have MOZ_CRASH or change the name to MaybeCreatedData... 
::: gfx/layers/client/ClientLayerManager.cpp @@ +316,5 @@ > + case EditReply::TReplyTextureRemoved: { > + // XXX - to manage reuse of gralloc buffers, we'll need to add some > + // glue code here to find the TextureClient and invoke a callback to > + // let the camera know that the gralloc buffer is not used anymore on > + // the compositor side and that it can reuse it. Is this why we need a reply for removing textures? You should add that explanation to the IPDL files ::: gfx/layers/client/CompositableClient.cpp @@ +146,5 @@ > return result.forget(); > } > > +TemporaryRef<BufferTextureClient> > +CompositableClient::CreateBufferTextureClient(gfx::SurfaceFormat aFormat) I would prefer to have this as part of CreateTextureClient, if possible @@ +148,5 @@ > > +TemporaryRef<BufferTextureClient> > +CompositableClient::CreateBufferTextureClient(gfx::SurfaceFormat aFormat) > +{ > + if (gfxPlatform::GetPlatform()->PreferMemoryOverShmem()) { I think this is wrong, whether we want memory or shmem might be different from one call to the next, not just a static platform wide thing. @@ +181,5 @@ > +void > +CompositableClient::OnTransaction() > +{ > + for (unsigned i = 0; i < mTexturesToRemove.size(); ++i) { > + mForwarder->RemoveTexture(this, mTexturesToRemove[i]); why not just do this in RemoveTextureClient? The forwarder is not going to do anything until the transaction anyway. ::: gfx/layers/client/CompositableClient.h @@ +130,5 @@ > + > + /** > + * A hook for the when the Compositable is detached from it's layer. > + */ > + virtual void Detach() {} Should this be HasBeenDetached() or something? @@ +136,3 @@ > protected: > + // The textures to destroy in the next transaction; > + std::vector<uint64_t> mTexturesToRemove; Why do we have to do this? 
::: gfx/layers/client/ImageClient.cpp @@ +42,5 @@ > + if (gfxPlatform::GetPlatform()->UseDeprecatedTextures()) { > + result = new DeprecatedImageClientSingle(aForwarder, aFlags, BUFFER_IMAGE_BUFFERED); > + } else { > + // ImageBuffered is for async-video only, which does not need buffering > + // with new textures This comment does not make sense to me. @@ +86,5 @@ > + if (mLastPaintedImageSerial == image->GetSerial()) { > + return true; > + } > + > + if (image->AsTextureClient()) { These if .. elif .. elif .. else blocks that have re-emerged in places like this make me extremely sad. For me one of the big goals of the refactoring was to get rid of them, but they are creeping back in. The blocks are all similar and the differences are due to using a different kind of texture. I think that any texture-specific code should be in the texture clients and the only if else etc. blocks should be when we create a texture client/host or check that we have the correct kind of client/host (and actually the latter case should probably be in the texture). If we can't do this because we are presenting different interfaces, then I think that is an indication that the choice of interfaces is broken and needs re-working. @@ +132,5 @@ > + } else { > + nsRefPtr<gfxASurface> surface = image->GetAsSurface(); > + MOZ_ASSERT(surface); > + if (!mFrontBuffer) { > + gfxASurface::gfxImageFormat format Please factor out these front buffer creation blocks. ::: gfx/layers/composite/CompositableHost.cpp @@ +81,5 @@ > + it->SetCompositor(aCompositor); > + it = it->GetNextSibling(); > + } > +} > + Why do we link texture hosts like this? I see that we get a bit more code sharing between compositables, but it seems to come at the expense of safety. It seems pretty easy to use textures for the wrong thing. From a software design 101 perspective, we should only have a list of things if each thing is treated in the same way. 
That is not the case here; for example, a front buffer must not be treated in the same way as a back buffer, likewise for the Y plane vs the Cb plane or whatever. The only place it really makes sense is for tiling, and that uses a pretty different setup anyway. I think conceptually, we have a map from the use for a texture host to the texture host, and representing that map as a list loses information. Therefore, I strongly prefer the existing situation where the compositable decides how to store its textures. Though I agree with the aim of sharing some more code here, possibly by providing more helper methods in CompositableHost. ::: gfx/layers/composite/CompositableHost.h @@ +190,5 @@ > virtual void Dump(FILE* aFile=NULL, > const char* aPrefix="", > bool aDumpHtml=false) { } > static void DumpDeprecatedTextureHost(FILE* aFile, DeprecatedTextureHost* aTexture); > + static void DumpTextureHost(FILE* aFile, TextureHost* aTexture); Should all these Dump methods be DEBUG only? ::: gfx/layers/composite/ImageHost.cpp @@ +60,5 @@ > + aFilter); > + aEffectChain.mPrimaryEffect = effect; > + TileIterator* it = source->AsTileIterator(); > + if (it) { > + it->BeginTileIteration(); What if we have a picture rect and tiles? ::: gfx/layers/composite/ImageHost.h @@ +38,5 @@ > > +/** > + * Single-buffered ImageHost. > + */ > +class Image_Host_Single : public ImageHost Don't use underscores. ::: gfx/layers/composite/TextureHost.h @@ +171,5 @@ > + RefPtr<NewTextureSource> mNextSibling; > +}; > + > +/** > + * Interface for TextureSources that can be updated from a DataSourceSurface. I'm not sure what a texture source is for any more. They used to be just a view on a texture host or render target that could be used as the source for texturing. Now they have an Update function and siblings and so forth. The responsibilities between texture hosts and texture sources seem to be getting blurred. Do you have a strong story to tell for the different abstractions? 
@@ +240,5 @@ > + * opposed to TextureSource that corresponds to device textures. > + * This means that for YCbCr planes, even though they are represented as > + * 3 textures internally (3 TextureSources), we have use 1 TextureHost and not 3, > + * because the 3 planes are stored in the same buffer of shared memory, before > + * they are uploaded separately. Hmm, so this goes some way to answering my question above. I think it makes sense. But I wonder if there is still the need for the old abstraction of a TextureView or something? Also TextureHost/Clients are now really badly named (oh dear, we are getting back into the naming debate again). I propose: TextureClient/Host -> BufferClient/Host NewTextureSource -> Texture TextureSource (old abstraction) -> TextureView. But, more importantly, please check that all the old comments are updated. @@ +249,5 @@ > + * matches the lifetime of the TextureClient/Host pair. It also means > + * TextureClient/Host do not implement double buffering, which is the > + * reponsibility of the compositable (which would use two Texture pairs). > + * > + * The Lock/Unlock mecanism here mirrors Lock/Unlock in TextureClient. You have a lot of grammar spelling mistakes in the comments, please give them a proof reading :-) ::: gfx/layers/ipc/LayerTransaction.ipdlh @@ +313,5 @@ > +/** > + * Tells the compositor-side which texture to use (for example, as front buffer > + * if there is several textures for double buffering) > + */ > +struct OpUseTexture { Why do we need a separate message for this? Why not combine with Add? I don't even see any code that uses it. I much prefer to have methods/ops that take the required number of textures, makes it much harder to make a mistake. 
::: gfx/layers/ipc/SharedRGBImage.h @@ +82,5 @@ > > + > +/** > + * Stores RGB data in shared memory > + * It is assumed that the image width and stride are equal This seems a bold assumption ::: gfx/layers/opengl/CompositingRenderTargetOGL.h @@ +151,5 @@ > } > > + gfx::SurfaceFormat GetFormat() const MOZ_OVERRIDE > + { > + // XXX - Should it be implemented ? is the above assert true ? I'm pretty sure that assert is wrong. The whole point of a render target is that it can be used as a texture source, no? How else do we do container layers with intermediate surfaces? ::: gfx/layers/opengl/TextureHostOGL.cpp @@ +144,5 @@ > + nsIntSize size = ThebesIntSize(aSurface->GetSize()); > + if (!mTexImage || > + mTexImage->GetSize() != size || > + mTexImage->GetContentType() != gfx::ContentForFormat(aSurface->GetFormat())) { > + if (mAllowTiling) { Why do texture sources know about tiling? ::: gfx/layers/opengl/TextureHostOGL.h @@ +181,5 @@ > + gl::GLContext* mGL; > + bool mAllowTiling; > + bool mIterating; > +}; > + Why do both texture sources and texture hosts have to be specialised? At the moment, only hosts are specialised aren't they? ::: gfx/thebes/gfxPlatform.cpp @@ +254,5 @@ > mGraphiteShapingEnabled = UNINITIALIZED_VALUE; > mOpenTypeSVGEnabled = UNINITIALIZED_VALUE; > mBidiNumeralOption = UNINITIALIZED_VALUE; > > + if (XRE_GetProcessType() == GeckoProcessType_Default && I don't think this does what you want it to, even in multi-process land the main process will be GeckoProcessType_Default, so this will always be true, right? I added process IDs to the texture factory identifier struct for the d3d9 compositor work. You might be able to reuse that when you use this pref to check if we are multi-process or multi-thread. 
::: modules/libpref/src/init/all.js @@ +4066,5 @@ > pref("layers.offmainthreadcomposition.testing.enabled", false); > // Whether to animate simple opacity and transforms on the compositor > pref("layers.offmainthreadcomposition.async-animations", false); > +// Whether to prefer normal memory over shared memory. Ignored with cross-process compositing > +pref("layers.disable-shmem", false); Given that using shmem in preference to normal memory in a single process is a bit silly, there should probably be a force-shmem pref and the default is false. Or maybe we could just not have a pref at all.
Attachment #776494 - Flags: feedback?(ncameron) → feedback+
(In reply to Jeff Muizelaar [:jrmuizel] from comment #50) > Comment on attachment 776494 [details] [diff] [review] > Part 4 - Texture refactoring + OGL backend > > Review of attachment 776494 [details] [diff] [review]: > ----------------------------------------------------------------- > > There was some random stuff we found while looking through. We have some > other questions that we can talk about in the meeting tomorrow: > > Why a linked list of TextureSources? The old textures have a very quirky "composite" (as in the composite design pattern) TextureSource, where if an effect needs several textures, it asks its only TextureSource for sub-textures. I much prefer giving the effect a list of textures (what the new textures do) so that you don't get a fake texture source that is just a container for 3 sub-sources. I would like to do something like: struct TextureList { RefPtr<TextureSource> mFirstTexture; TextureSource* get(uint32_t index) { ... } }; And give the effect a texture list. But to do it properly I need to get rid of the old textures, so this sugar should come as a followup. > > Why treat YCbCr as three buffers? I am not sure I understand the question. YCbCr is treated as 3 buffers because it's the efficient way to convert it on the GPU, and because TextureSource is meant to be a very thin abstraction over one texture object. > > GetSubSource() seems like an array implemented as a linked list GetSubSource is a leftover from the old textures that will disappear along with the old textures. > > Is doing the YCbCr stuff together the right thing? > Do you mean storing them in one buffer? Yes, because we run out of file descriptors quickly otherwise. At some point I even meant to place several YCbCr images in the same buffer to further reduce file descriptor consumption, but it turned out just having the 3 planes in one buffer was enough for us to play 30 videos simultaneously without crashing, so I didn't go for the several images per buffer thing. 
> How do we avoid racing during update? Either we use single buffering with a TextureClient/Host pair that implements locking properly, or we do double/triple buffering and make sure the client never writes in the buffer that is being composited (that would be the current behavior in most cases). > > Why do we store the buffer info in the shared memory? We have done that for as long as I can remember. I suppose we could change that, but the current way works well and I can think of many things to improve, so I wouldn't rush on fixing that one first. > @@ +138,5 @@ > > + if (!Allocate(bufSize)) { > > + return false; > > + } > > + ImageDataSerializer serializer(GetBuffer()); > > + serializer.InitializeBufferInfo(aSize, mFormat); > > Rather than constructing two ImageDataSerializers (one here, and one in the > previous function) for different purposes, how about storing aSize as mSize, > and doing both operations in the previous function. > > Same thing for the YCbCr version. > Note that creating Serializers on the stack is dirt cheap. > ::: gfx/layers/composite/ImageHost.cpp > @@ +40,5 @@ > > + const nsIntRegion* aVisibleRegion, > > + TiledLayerProperties* aLayerProperties) > > +{ > > + if (!GetCompositor()) { > > + // should only happen during tabswitch if async-video is still sending frames. > > I don't understand why the compositor goes away during tabswitch. I think I meant drag and drop to another window. Fixed the comment. > > ::: gfx/layers/composite/TextureHost.cpp > @@ +94,5 @@ > > + case LAYERS_BASIC: > > + return CreateBackendIndependentTextureHost(aID, > > + aDesc, > > + aDeallocator, > > + aFlags); > > Is this used for more backends than LAYERS_BASIC? It is used for all backends. > @@ +296,5 @@ > > + * This can trigger texture uploads, so do not call it inside transactions > > + * so as to not upload textures while the main thread is blocked. > > + * Must not be called while this TextureHost is not sucessfully Locked. 
> > + */ > > + virtual NewTextureSource* GetTextureSource() = 0; > > Matt thinks this should be called GetFirstTextureSource(). Unless you read > the comment it's not clear that this returns a list. I would prefer GetTextureSources() or GetTextureSourceList() actually. > > ::: gfx/layers/ipc/LayerTransaction.ipdlh > @@ +326,5 @@ > > + > > +struct OpUpdateTexture { > > + PCompositable compositable; > > + uint64_t textureID; > > + MaybeRegion region; > > Can this just be a region that's the size of the texture? Does not having it > nullable make things simpler? It's simpler with it being nullable because the region is only used for IncrementalContentHost, and not all textures support partial updates. Right now we can easily assert when we don't support partial updates.
(In reply to Nick Cameron [:nrc] from comment #51) > Comment on attachment 776494 [details] [diff] [review] > Part 4 - Texture refactoring + OGL backend > > Review of attachment 776494 [details] [diff] [review]: > ----------------------------------------------------------------- > > Damn, that is a lot of stuff - nical++ > > We need to talk about landing this and the d3d9 compositor work. That is > mostly ready to go, but I think interacts heavily with this stuff. Please > have a look at the patches on bug 874721 and let me know how badly you think > we are going to stomp each other. I'll have a look. I am hoping that, since the old textures are still around, there will be some work to port the d3d9 stuff to the new textures, but landing d3d9 should not break the new textures and vice versa. d3d9 will only work with the deprecated textures pref on until we port it to the new textures. > > @@ +49,5 @@ > > +const TextureFlags TEXTURE_FRONT = 1 << 10; > > +// A texture host on white for component alpha > > +const TextureFlags TEXTURE_ON_WHITE = 1 << 11; > > + // A texture host on black for component alpha > > +const TextureFlags TEXTURE_ON_BLACK = 1 << 12; > > I have a bad feeling about these flags - these things seem like properties > of how the texture is used rather than of the texture itself, so they should > be defined somewhere else, but I haven't got to how they are used yet, so > this could be way off. At first I had those flags in different members (TexturePurpose, RemovalFlags, TextureFlags) and it was confusing and error prone, so I ended up storing them in the same member. Some of them are important for the texture to know what to do and others are important for the compositable to use the texture properly. It's not ideal this way, but they are all about the texture, so they would need to be stored in the texture anyway. 
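For illustration, because the flags above are plain bits stored in a single integer member, purpose bits and usage bits combine freely with bitwise OR. This sketch uses the flag values quoted in the review; the helper function is made up for the example:

```cpp
#include <cassert>
#include <cstdint>

typedef uint32_t TextureFlags;

// Values quoted from the patch under review.
const TextureFlags TEXTURE_FRONT    = 1 << 10;
const TextureFlags TEXTURE_ON_WHITE = 1 << 11;
const TextureFlags TEXTURE_ON_BLACK = 1 << 12;

// Because everything lives in one bitfield, a texture can carry both
// identity bits and usage bits at the same time. Hypothetical helper:
inline bool IsFrontOnWhite(TextureFlags aFlags) {
  return (aFlags & (TEXTURE_FRONT | TEXTURE_ON_WHITE)) ==
         (TEXTURE_FRONT | TEXTURE_ON_WHITE);
}
```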
> ::: gfx/layers/ImageContainer.h > @@ +70,5 @@ > > > > public: > > virtual ~Image() {} > > > > + virtual TextureClient* AsTextureClient() { return nullptr; } > > I don't think I like Images knowing about TextureClients, seems like we're > leaking across abstraction layers. I disagree. Image itself is an abstraction of the layers system that leaks into the media code in that sense (and has been designed to be the interface between media and layers). TextureClient is very similar to images in some ways, and the more we optimize layers, the more we will want other parts of gecko to paint directly into TextureClients. So if there is one abstraction that I want to see leaked in the users of layers, it is definitely TextureClient. > > ::: gfx/layers/ImageDataSerializer.cpp > @@ +23,5 @@ > > +// +-------------------+ ------+ > > +// | | > > +// | data | > > +// | | > > +// +-------------------+ > > Why don't we use a struct that embodies this structure? There is a struct for the metadata; I don't understand the question. > > +namespace mozilla { > > +namespace layers { > > + > > +class ImageDataSerializerBase > > What is the motivation for having these in separate classes rather than just > as methods in Image or somewhere? Should probably explain that in a comment > here. I see it the other way around: what is the motivation to place generic code into a method of a specific class? Plus, Image is only used on the content side, while serialization/deserialization is an IPC thing. > > ::: gfx/layers/basic/BasicCompositor.h > @@ +37,5 @@ > > > > virtual void Destroy() MOZ_OVERRIDE; > > > > + virtual TemporaryRef<DataTextureSource> > > + CreateDataTextureSource(TextureFlags aFlags = 0) MOZ_OVERRIDE { return nullptr; } > > Is this going to change in the future? Should have a TODO comment maybe. If > we are going to leave it like this, it would be better to have MOZ_CRASH or > change the name to MaybeCreatedData... 
It was pure virtual before I separated this into several patches. I'll add an assert. > ::: gfx/layers/client/CompositableClient.cpp > @@ +146,5 @@ > > return result.forget(); > > } > > > > +TemporaryRef<BufferTextureClient> > > +CompositableClient::CreateBufferTextureClient(gfx::SurfaceFormat aFormat) > > I would prefer to have this as part of CreateTextureClient, if possible This returns a BufferTextureClient which has a superset of the TextureClient API. We should avoid having code that calls in the very generic factory methods and then unsafely casts to the type of class they expect. The callers of this method need a BufferTextureClient so they should not call a method that may return something that is not a BufferTextureClient. > > @@ +148,5 @@ > > > > +TemporaryRef<BufferTextureClient> > > +CompositableClient::CreateBufferTextureClient(gfx::SurfaceFormat aFormat) > > +{ > > + if (gfxPlatform::GetPlatform()->PreferMemoryOverShmem()) { > > I think this is wrong, whether we want memory or shmem might be different > from one call to the next, not just a static platform wide thing. > > @@ +181,5 @@ > > +void > > +CompositableClient::OnTransaction() > > +{ > > + for (unsigned i = 0; i < mTexturesToRemove.size(); ++i) { > > + mForwarder->RemoveTexture(this, mTexturesToRemove[i]); > > why not just do this in RemoveTextureClient? The forwarder is not going to > do anything until the transaction anyway. There are cases where a TextureClient is removed outside of a transaction so I had to add this deferred mechanism. 
I am not super happy with this, but I have plans to change it in another bug (when we make it possible for compositables to share TextureClients). > > ::: gfx/layers/client/ImageClient.cpp > @@ +42,5 @@ > > + if (gfxPlatform::GetPlatform()->UseDeprecatedTextures()) { > > + result = new DeprecatedImageClientSingle(aForwarder, aFlags, BUFFER_IMAGE_BUFFERED); > > + } else { > > + // ImageBuffered is for async-video only, which does not need buffering > > + // with new textures > > this comment does not make sense to me The only reason we used a buffered ImageClient is that with async-video we need to retain a frame on the compositor side so that we can reupload it when the layer manager is destroyed and rebuilt (for instance whenever we click on the title bar), to keep the video area from flashing black frames (since the video receives frames asynchronously with respect to the management of the layer tree). With new textures there is no need to retain an extra texture anymore because the shared data is always accessible by the compositor side. So we don't need ImageClientBuffered anymore, and in there we just return ImageClientSingle. When we remove the old textures we'll remove the BUFFER_IMAGE_BUFFERED token and it will be clearer. I updated the comment. > > @@ +86,5 @@ > > + if (mLastPaintedImageSerial == image->GetSerial()) { > > + return true; > > + } > > + > > + if (image->AsTextureClient()) { > > These if .. elif .. elif .. else blocks that have re-emerged in places like > this make me extremely sad. For me one of the big goals of the refactoring > was to get rid of them, but they are creeping back in. The blocks are all > similar and the differences are due to using a different kind of texture. I > think that any texture-specific code should be in the texture clients and > the only if else etc. 
blocks should be when we create a texture client/host > or check that we have the correct kind of client/host (and actually the > latter case should probably be in the texture). If we can't do this because > we are presenting different interfaces, then I think that is an indication > that the choice of interfaces is broken and need re-working. Note that the number of if .. else if blocks has been reduced thanks to AsTextureClient(), so the situation (although not fixed) is strictly better. We need two cases: the "already shared" case and the "you need to copy into a TextureClient" case. The latter will probably require that we add a more unified abstraction for layer input that is not already a texture client. Anyway, I agree with you on this but it's not in the scope of this bug (which already improves things). > ::: gfx/layers/composite/CompositableHost.cpp > @@ +81,5 @@ > > + it->SetCompositor(aCompositor); > > + it = it->GetNextSibling(); > > + } > > +} > > + > > Why do texture hosts like this? I see that we get a bit more code sharing > between compositables, but it seems to come at the expense of safety. It > seems pretty easy to use textures for the wrong thing. From a software > design 101 perspective, we should only have a list of things if each thing > is treated in the same way. That is not the case here, for example a front > buffer must not be treated in the same way as a back buffer, likewise for > the Y plain vs the Cb plain or whatever. The only place it really makes > sense is for tiling, and that uses a pretty different setup anyway. I think > conceptually, we have a map from the use for a texture host to the texture > host and representing that map as a list loses information. Therefore, I > strongly prefer the existing situation where the compositable decides how to > store its textures. Though I agree with the aim of sharing some more code > here, possibly by providing more helper methods in CompositableHost. 
Yeah, I hoped to gain more from doing this than we actually gain now. I'll revert to having each compositable host manage how to store its textures, but it means we'll have to be careful about a few things that all compositables are expected to forward to their textures in some places. IMO it will be much more prone to errors (due to forgetting to implement some stuff in compositable implementations) but will be less confusing. Maybe the best would be to expose a way to iterate over a compositable's textures, and all compositables would have to implement it so that the base class can still take care of those things where forwarding to the Textures is important. > > ::: gfx/layers/composite/CompositableHost.h > @@ +190,5 @@ > > virtual void Dump(FILE* aFile=NULL, > > const char* aPrefix="", > > bool aDumpHtml=false) { } > > static void DumpDeprecatedTextureHost(FILE* aFile, DeprecatedTextureHost* aTexture); > > + static void DumpTextureHost(FILE* aFile, TextureHost* aTexture); > > Should all these Dump methods be DEBUG only? The #ifdef soup is a bit annoying: we end up breaking some configurations without knowing (that's what happened with MOZ_DUMP_PAINTING a few months ago), or it forces you to rebuild if you change something that may affect the #ifdefed parts, so I'd rather document methods so that people do not use them for non-debug purposes and make them always available. > > ::: gfx/layers/composite/ImageHost.cpp > @@ +60,5 @@ > > + aFilter); > > + aEffectChain.mPrimaryEffect = effect; > > + TileIterator* it = source->AsTileIterator(); > > + if (it) { > > + it->BeginTileIteration(); > > what if we have a picture rect and tiles? Good question. We didn't support that before, so I didn't change it. We should investigate that, I'll add an assertion in the meantime. > > ::: gfx/layers/composite/ImageHost.h > @@ +38,5 @@ > > > > +/** > > + * Single-buffered ImageHost. 
> > + */ > > +class Image_Host_Single : public ImageHost > > Don't use underscores Yeah sorry, that was just to prevent it from being renamed in the sed scripts, I just forgot to rename it back. > > ::: gfx/layers/composite/TextureHost.h > @@ +171,5 @@ > > + RefPtr<NewTextureSource> mNextSibling; > > +}; > > + > > +/** > > + * Interface for TextureSources that can be updated from a DataSourceSurface. > > I'm not sure what a texture source is for any more. They used to be just a > view on a texture host or render target that could be used as the source for > texturing. Now they have an Update function and siblings and so forth. The > responsibilites between texture hosts and texture sources seem to be getting > blurred. Do you have a strong story to tell for the different abstractions? A TextureSource is a thin abstraction over a Texture that can be used by the compositor. They are not a view on the texture host anymore, because the TextureHost is an IPC thing. If we were to implement non-OMTC layers properly, we would use texture sources without texture hosts. Some of the TextureSource classes unfortunately need to be used with a specific texture host, but this is because the shared data and the device data is the same, and we can't get around that without impacting performance. The update function is just for DataTextureSource, which is a Texture that can be updated from a DataSourceSurface. Note that the updated part is the device data, not the shared data. I think that by splitting TextureHost and TextureSource into the thing that cares about shared data and the thing that cares about device data, we get cleaner abstractions with more precise responsibilities (and we share much more code, since for example no backend needs to implement a YCbCrTextureHost anymore). The siblings thing is because the compositor should take lists of texture sources rather than the sub-source stuff, as I said in my previous comment. 
I like intrusive linked lists because they are simple and efficient; others don't like this pattern because it leaks the container into the element's abstraction. Given that texture sources are used for one purpose only, it makes sense to me to have the idea of the list be visible in the class itself. > > @@ +240,5 @@ > > + * opposed to TextureSource that corresponds to device textures. > > + * This means that for YCbCr planes, even though they are represented as > > + * 3 textures internally (3 TextureSources), we have use 1 TextureHost and not 3, > > + * because the 3 planes are stored in the same buffer of shared memory, before > > + * they are uploaded separately. > > Hmm, so this goes some way to answering my question above. I think it makes > sense. But I wonder if there is still the need for the old abstraction of a > TextureView or something? Also TextureHost/Clients are now really badly > named (oh dear, we are getting back into the naming debate again). I propose: > > TextureClient/Host -> BufferClient/Host > NewTextureSource -> Texture > TextureSource (old abstraction) -> TextureView. > > But, more importantly, please check that all the old comments are updated. I agree with those names, and I strongly suggest that we postpone the naming debate to after this patch lands :) NewTextureSource and TextureSource will be merged after we remove the old textures, so the name for NewTextureSource isn't very important as it is just temporary. > ::: gfx/layers/ipc/LayerTransaction.ipdlh > @@ +313,5 @@ > > +/** > > + * Tells the compositor-side which texture to use (for example, as front buffer > > + * if there is several textures for double buffering) > > + */ > > +struct OpUseTexture { > > Why do we need a separate message for this? Why not combine with Add? I > don't even see any code that uses it. I much prefer to have methods/ops that > take the required number of textures, makes it much harder to make a mistake. 
It is not used in ImageClient/Host, but the typical scenario for TextureClient/Host is to have a compositable client use two of them for double buffering and send OpUseTexture to tell the host side which one is now the front buffer. It just so happens that in the ImageClient case, we don't need double buffering, so we never need to tell the compositor side which texture is to be presented. OpAddTexture creates a new TextureHost, which is different from swapping textures. > > ::: gfx/layers/ipc/SharedRGBImage.h > @@ +82,5 @@ > > > > + > > +/** > > + * Stores RGB data in shared memory > > + * It is assumed that the image width and stride are equal > > This seems a bold assumption Yes, the assumption was already present before this bug, so I didn't change it. (Imagine how much internal fight happens inside me when I am refactoring things: I want to fix the world but want to avoid feature creep at the same time. If I hadn't resisted, this would have been a much bigger patch :p) > > ::: gfx/layers/opengl/CompositingRenderTargetOGL.h > @@ +151,5 @@ > > } > > > > + gfx::SurfaceFormat GetFormat() const MOZ_OVERRIDE > > + { > > + // XXX - Should it be implemented ? is the above assert true ? > > I'm pretty sure that assert is wrong. The whole point of a render target is > that it can be used as a texture source, no? How else do we do container > layers with intermediate surfaces? I thought so too; let's fix this as a followup since it was already broken before. > > ::: gfx/layers/opengl/TextureHostOGL.cpp > @@ +144,5 @@ > > + nsIntSize size = ThebesIntSize(aSurface->GetSize()); > > + if (!mTexImage || > > + mTexImage->GetSize() != size || > > + mTexImage->GetContentType() != gfx::ContentForFormat(aSurface->GetFormat())) { > > + if (mAllowTiling) { > > Why do texture sources know about tiling? It's the tiling as in TextureImage tiling. We really need a different name to differentiate this from the tiled layer stuff. 
So it makes sense because the tiling here happens around the device texture, not the shared data, so it really belongs to TextureSource. > > ::: gfx/layers/opengl/TextureHostOGL.h > @@ +181,5 @@ > > + gl::GLContext* mGL; > > + bool mAllowTiling; > > + bool mIterating; > > +}; > > + > > Why do both texture sources and texture hosts have to be specialised? At the > moment, only hosts are specialised aren't they? TextureSources are now separated from TextureHost. This lets us use some TextureSource classes without a TextureHost (non-OMTC layers should eventually use that, and the new FPS counter that Vlad wrote uses that also), and it lets us share a lot of code, because some of the IPC stuff (TextureHost) is not backend-specific (ShmemTextureHost, MemoryTextureHost, including YCbCr). So by separating the TextureHost and TextureSource we don't need to implement Shmem, Memory and YCbCr hosts for each backend. And that is pretty neat because for instance the basic backend now only needs to implement a DataTextureSource and just works. TextureSources are also specialized (if by that you mean have implementations).
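As a rough illustration of the intrusive sibling chain discussed in this thread, forwarding an operation such as SetCompositor to every texture source is just a walk over the chain. Raw pointers stand in for RefPtr and all names are hypothetical, not the real TextureSource API:

```cpp
#include <cassert>

// Minimal stand-in for the intrusive sibling chain: each source owns a
// link to the next one, so the "list" is just its first element.
struct Source {
  Source* next = nullptr;  // RefPtr<NewTextureSource> mNextSibling in the real code
  int compositorId = 0;

  Source* GetNextSibling() { return next; }
};

// Forwarding an operation to every sibling, as the SetCompositor loop in
// the quoted patch does, is a plain traversal of the chain.
inline void SetCompositorAll(Source* aFirst, int aId) {
  for (Source* it = aFirst; it; it = it->GetNextSibling()) {
    it->compositorId = aId;
  }
}
```

The trade-off nical describes is visible here: the container is invisible (no separate list object), at the cost of the element type knowing it lives in a list.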
(In reply to Jeff Muizelaar [:jrmuizel] from comment #50) > Comment on attachment 776494 [details] [diff] [review] > Part 4 - Texture refactoring + OGL backend > > Review of attachment 776494 [details] [diff] [review]: > ----------------------------------------------------------------- > Talk about Disconnect vs. Detach() can we disambiguate their > names more? Disconnect is an IPC thing for layers. It means disconnect the IPC glue between client and host. Attach/Detach are between Compositables and Layers, because the compositable is created separately from the layer and then attached to it. A compositable can outlive its layer (mostly happens with async-video), so compositables have this Detach method to clear out all potential references to the layer, the compositor GL context, etc. These references are set again when the compositable is attached to a new layer.
I'd like to come back to what I said about the list of TextureHost in CompositableHost. Removing it makes things more complicated than they need to be, because the compositable is what keeps the TextureHost alive, and the API lets us add textures and then decide which one to use as front buffer. So keeping a list of the added TextureHosts is necessary to keep them alive (they must not go away before RemoveTextureHost is called). What CompositableHost implementations really need to care about is UseTextureHost(), so that they can remember which TextureHost to use as front buffer/texture on white/texture on black/etc. So I am really for keeping this list managed by the base class, since its purpose is to manage the lifetime of TextureHosts, not to decide which one to use for compositing.
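A minimal sketch of the ownership split described above: the base class owns every added TextureHost, while UseTextureHost merely selects which one is the front buffer. std::shared_ptr stands in for the real reference counting and all names are hypothetical:

```cpp
#include <cassert>
#include <cstdint>
#include <memory>
#include <vector>

// Hypothetical stand-in for TextureHost.
struct Host { uint64_t id; };

struct CompositableBase {
  // The base class owns every host added via OpAddTexture, so hosts stay
  // alive until RemoveTexture, regardless of which one is composited.
  std::vector<std::shared_ptr<Host>> mTextures;
  std::shared_ptr<Host> mFrontBuffer;  // what UseTexture selected

  void AddTexture(std::shared_ptr<Host> aHost) { mTextures.push_back(aHost); }

  void RemoveTexture(uint64_t aId) {
    for (size_t i = 0; i < mTextures.size(); ++i) {
      if (mTextures[i]->id == aId) {
        mTextures.erase(mTextures.begin() + i);
        return;
      }
    }
  }

  // OpUseTexture: pick which already-added host is the front buffer.
  // Implementations only need to override what "use" means for them.
  bool UseTexture(uint64_t aId) {
    for (auto& t : mTextures) {
      if (t->id == aId) {
        mFrontBuffer = t;
        return true;
      }
    }
    return false;
  }
};
```

This also shows the grey area nrc objects to later in the thread: a host can sit in mTextures without being selected for anything yet.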
To confirm gralloc buffer rendering, it seems that bug 894891 needs to be fixed.
Blocks: 871364
Attachment #776530 - Flags: review?(jmuizelaar)
Attachment #772642 - Flags: review?(bas)
Attachment #776530 - Flags: review?(jmuizelaar) → review+
Comment on attachment 772642 [details] [diff] [review] Part 3 - CreateTexturedEffect changes Review of attachment 772642 [details] [diff] [review]: ----------------------------------------------------------------- ::: gfx/layers/Effects.h @@ +229,5 @@ > + * aSource as the (first) texture source. > + * > + * Note that aFormat can be different form aSource->GetFormat if, we are > + * creating an effect that takes several texture sources (like with YCBCR > + * where aFormat would be FOMRAT_YCBCR and each texture source would be We should use YCBCR everywhere. Let's fix it. @@ +289,5 @@ > > +/** > + * Create a textured effect based on aSource format. > + * > + * This version excudes the possibility of component alpha. nit: excludes @@ +292,5 @@ > + * > + * This version excudes the possibility of component alpha. > + */ > +inline TemporaryRef<TexturedEffect> > +CreateTexturedEffect(TextureSource *aTexture, Rename to aSource and be consistent :) @@ +295,5 @@ > +inline TemporaryRef<TexturedEffect> > +CreateTexturedEffect(TextureSource *aTexture, > + const gfx::Filter& aFilter) > +{ > + return CreateTexturedEffect(aTexture, nullptr, aFilter); Can't this just call the single input CreateTexturedEffect directly rather than going through the component alpha one? i.e. CreateTexturedEffect(aSource->GetFormat(), aSource, aFilter);
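The change Bas suggests, forwarding the convenience overload straight to the single-input factory keyed on the source's own format, amounts to something like this sketch. The real effect types are elided and the placeholder int return values are invented for the example:

```cpp
#include <cassert>

// Hypothetical stand-ins for the real types.
enum Format { FORMAT_B8G8R8A8, FORMAT_YCBCR };
struct TextureSource {
  Format format;
  Format GetFormat() const { return format; }
};
struct Filter {};

// Single-input factory keyed on an explicit format. As the patch comment
// notes, aFormat may differ from the source's own format for multi-plane
// cases like YCbCr. The int return is a placeholder for the effect kind.
inline int CreateTexturedEffect(Format aFormat, TextureSource* aSource,
                                const Filter& aFilter) {
  (void)aSource;
  (void)aFilter;
  return aFormat == FORMAT_YCBCR ? 2 : 1;
}

// The convenience overload just forwards with the source's format,
// skipping the component-alpha variant entirely.
inline int CreateTexturedEffect(TextureSource* aSource, const Filter& aFilter) {
  return CreateTexturedEffect(aSource->GetFormat(), aSource, aFilter);
}
```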
Attachment #772642 - Flags: review?(bas) → review+
Comment on attachment 776494 [details] [diff] [review] Part 4 - Texture refactoring + OGL backend Review of attachment 776494 [details] [diff] [review]: ----------------------------------------------------------------- Here's the majority of my review comments, I'm not completely done yet but I don't want you to have to wait too long. When you reply please do so in the splinter review framework so I can see all your replies in context with other people's comments and such! Right now I haven't looked through your responses for every comment that you responded to because it's a pain to correlate them. In general I'm very happy with the changes! ::: gfx/gl/GLTextureImage.cpp @@ +37,5 @@ > + RefPtr<gfxASurface> thebesSurf > + = new gfxImageSurface(aSurface->GetData(), > + ThebesIntSize(aSurface->GetSize()), > + aSurface->Stride(), > + SurfaceFormatToImageFormat(aSurface->GetFormat())); Man, it sucks that we're converting into a gfxASurface here anyway, we shouldn't be adding new dependencies on it. But I suppose we can fix that later. ::: gfx/layers/CompositorTypes.h @@ +49,5 @@ > +const TextureFlags TEXTURE_FRONT = 1 << 10; > +// A texture host on white for component alpha > +const TextureFlags TEXTURE_ON_WHITE = 1 << 11; > + // A texture host on black for component alpha > +const TextureFlags TEXTURE_ON_BLACK = 1 << 12; I agree with Nick here. ::: gfx/layers/ImageContainer.h @@ +70,5 @@ > > public: > virtual ~Image() {} > > + virtual TextureClient* AsTextureClient() { return nullptr; } Agree with Nick here. ::: gfx/layers/ImageDataSerializer.cpp @@ +23,5 @@ > +// +-------------------+ ------+ > +// | | > +// | data | > +// | | > +// +-------------------+ I think it's because the offset is set to be variable. I'm not sure why though since MOZ_ALIGN_WORD(sizeof(SurfaceBufferInfo)); is a known constant. We should fix that :). (fwiw the size of SurfaceBufferInfo -will- be word-aligned (even if you take offset out) since gfx::SurfaceFormat should be 4 bytes. 
If you have some platform where this might not be true just union the format with a uint32_t, but I honestly doubt we support such a platform. We should also consider making it easier to switch alignment if we want to (Moz2D offers some useful stuff here, probably so does MFBT). @@ +106,5 @@ > + gfxIntSize size(info->width, info->height); > + RefPtr<gfxImageSurface> surf = > + new gfxImageSurface(GetData(), > + size, > + stride, nit: This might just be me but I'd like some more vertical compression here (i.e. as far as I'm concerned GetData(), size and stride go onto the same line here). @@ +121,5 @@ > + gfx::IntSize size(info->width, info->height); > + RefPtr<gfx::DataSourceSurface> surf = > + gfx::Factory::CreateWrappingDataSourceSurface(GetData(), > + stride, > + size, nit: and here ::: gfx/layers/ImageDataSerializer.h @@ +35,5 @@ > + TemporaryRef<gfxImageSurface> GetAsThebesSurface(); > + > +protected: > + ImageDataSerializerBase(uint8_t* aData) > + : mData(aData) {} nit: I'd prefer some vertical whitespace here. ::: gfx/layers/basic/BasicCompositor.h @@ +37,5 @@ > > virtual void Destroy() MOZ_OVERRIDE; > > + virtual TemporaryRef<DataTextureSource> > + CreateDataTextureSource(TextureFlags aFlags = 0) MOZ_OVERRIDE { return nullptr; } It feels to me like this should just be pure virtual and this is just stubbing it since it's not implemented everywhere yet, TODO comment and MOZ_CRASH seems like the way to go. ::: gfx/layers/client/ImageClient.cpp @@ +86,5 @@ > + if (mLastPaintedImageSerial == image->GetSerial()) { > + return true; > + } > + > + if (image->AsTextureClient()) { I wasn't aware one of the big goals of the refactoring was to get rid of them, I didn't really feel they were a major problem anyway? ;) In general I think there's a trade-off between having polymorphism in functions like this and having a ridiculous amount of abstraction levels. Or condensing too many abstraction levels into a big class. 
I'm not sure whether an ImageClientSingleYCbCr, ImageClientSingleTextureClient and ImageClientSingleSurface or something are really the answer here, especially if an ImageClient needs to be able to morph between types. In this case I can't think of a particularly good solution since image objects are LayerManager independent for all kinds of good reasons :). We could move this 'GenerateFrontBufferFromImage' or something into a separate function so the if elif lives there, but introducing some intermediate interface here seems like a poor choice to me. @@ +127,5 @@ > + GetForwarder()->UpdatedTexture(this, mFrontBuffer, nullptr); > + } else { > + MOZ_ASSERT(false); > + return false; > + } Some vertical white space here would be awesome! ::: gfx/layers/client/TextureClient.cpp @@ +121,5 @@ > + return false; > + } > + > + nsRefPtr<gfxContext> tmpCtx = new gfxContext(surf.get()); > + tmpCtx->SetOperator(gfxContext::OPERATOR_SOURCE); Moz2D has one, maybe we can use that? :-) ::: gfx/layers/composite/CompositableHost.cpp @@ +61,5 @@ > +TextureHost* > +CompositableHost::GetTextureHost(uint64_t aTextureID) > +{ > + RefPtr<TextureHost> it = mFirstTexture; > + while (it) { nit: If you're going to be clever and use !!it in one place, be consistent :-). ::: gfx/layers/composite/TextureHost.h @@ +155,5 @@ > + { > + return mNextSibling; > + } > + > + // temprary adapter to use the same SubSource API as the old TextureSource nit: temporary @@ +171,5 @@ > + RefPtr<NewTextureSource> mNextSibling; > +}; > + > +/** > + * Interface for TextureSources that can be updated from a DataSourceSurface. I agree with Nical's explanation.
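A sketch of the header-then-data layout under discussion, with the data offset as a word-aligned compile-time constant as Bas suggests. The struct fields and helper names are assumptions for illustration, not the actual ImageDataSerializer code:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Hypothetical metadata header, mirroring the layout diagram quoted above:
// a small SurfaceBufferInfo followed by the pixel data.
struct SurfaceBufferInfo {
  int32_t width;
  int32_t height;
  uint32_t format;  // gfx::SurfaceFormat is 4 bytes in practice
};

// MOZ_ALIGN_WORD-style rounding up to a 4-byte boundary.
constexpr size_t AlignWord(size_t aSize) {
  return (aSize + 3) & ~static_cast<size_t>(3);
}

// Because the header size is known at compile time, the data offset can be
// a constant rather than a value stored in the buffer itself.
constexpr size_t kDataOffset = AlignWord(sizeof(SurfaceBufferInfo));

inline uint8_t* GetData(uint8_t* aBuffer) { return aBuffer + kDataOffset; }
```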
(In reply to Bas Schouten (:bas.schouten) from comment #58) > Comment on attachment 776494 [details] [diff] [review] > Part 4 - Texture refactoring + OGL backend > > Review of attachment 776494 [details] [diff] [review]: > ----------------------------------------------------------------- > ::: gfx/layers/client/ImageClient.cpp > @@ +86,5 @@ > > + if (mLastPaintedImageSerial == image->GetSerial()) { > > + return true; > > + } > > + > > + if (image->AsTextureClient()) { > > I wasn't aware one of the big goals of the refactoring was to get rid of > them, I didn't really feel they were a major problem anyway? ;) In general I > think there's a trade-off between have polymorphism in functions like this > and having a ridiculous amount of abstraction levels. Or condensing too many > abstraction levels into a big class. I'm not sure whether an > ImageClientSingleYCbCr, ImageClientSingleTextureClient and > ImageClientSingleSurface or something are really the answer here, especially > if an ImageClient needs to be able to morph between types. In this case I > can't think of a particularly good solution since image objects are > LayerManager independent for all kinds of good reasons :). We could move > this 'GenerateFrontBufferFromImage' or something into a separate function so > the if elif lives there, but introducing some intermediate interface here > seems like a poor choice to me. I am not entirely sure which abstraction you criticize here, so I'll assume you are talking about Image using and exposing a TextureClient. The following rather long comment goes a bit beyond your comments but I have the feeling we are not all on the same page about the rôle of texture client and the problems that this refactoring is trying to fix. Having shared Image classes manipulate texture clients is something we really really want. 
Media code, screenshots and other parts of Gecko keep accessing the shared buffer while we composite it, or when we could be writing to it in, say, the ImageBridge thread. The Image really needs to control the lifetime of its TextureClient and Lock/Unlock appropriately. This is definitely one of the big (if not the biggest) goals of this refactoring. Remember that Image is an abstraction that belongs to the layers system (it's not an abstraction of media), and TextureClient is the only safe way to access shared buffers, so I'll insist on this: exposing TextureClient to Image is not leaking the design of layers into Images. A good example of an abstraction that badly leaked everywhere is SurfaceDescriptor. It is supposed to be a serialization helper mostly for our IPDL glue, and it is all over the media code, especially in B2G camera code. Rather than exposing naked surface descriptors with no synchronization guarantee, we should expose either the right low-level buffer type (gralloc, etc.) when we know that the buffer has not been presented to layers yet, or a TextureClient if it is already in layers and potentially accessed by the entire world. If you look at what ImageClientSingle was doing before, it was:

if (type == SHARED_FOO) {
  desc = casted_to_foo->GiveMeYourSurfaceDescriptor()
  sendTheDescriptor(desc)
} else if (type == SHARED_BAR) {
  desc = casted_to_bar->GiveMeYourSurfaceDescriptor()
  sendTheDescriptor(desc)
#ifdef MOZ_WIDGET_GONK
} else if (type == SHARED_TYPE_THAT_DOES_THE_SAME_BUT_ONLY_FOR_B2G) {
  desc = casted_to_b2g_thing->GiveMeYourSurfaceDescriptor()
  sendTheDescriptor(desc)
#endif
} else if ... // and so on for every image type
}

All shared image types were doing the same thing, and didn't share an interface. There is nothing to gain from all of these if/else blocks, and worse, it even forced us to #ifdef B2G in the middle of some platform-independent code! 
So it just makes sense that, since shared image types *have* to use texture clients for the reason I explained earlier, they should also expose it in a virtual method. Once you expose a TextureClient to any compositable, the latter knows exactly what to do with it (the base class does). So reducing the if/else awkwardness is just a nice side-effect of forcing Images to use a safe abstraction. Again, TextureClient should be the interface between layers and code that wants to paint and expects the painted surface to be composited efficiently (without copying) and safely (with proper reference counting and assurance that the compositor will not kill or composite the surface while you are painting to it). TextureClient is what we should be exposing to stuff as much as we can. It is not yet-another-abstraction that sits in the middle, really. We could go further and remove the notion of Image. The only thing that Image does and TextureClient does not is offer a container for a surface that is not shared with the compositor. So if we don't like having too many abstractions, let's just add that feature to the TextureClient API and kill Images completely (media code could manipulate a layers::TextureClient just as easily as a layers::Image, and it would manipulate something that the compositor knows how to work with). Plus, Image and its shared flavors represent a feature that we partially duplicate on other layer types. A method like GenerateFrontBufferFromImage does not cover the needs of our media code, which needs to share the same buffer with lots and lots of threads and processes for efficiency. When we finally get the time to implement it, we will even have some of the decoder libraries writing directly inside a TextureClient (zero copy until texture upload \o/). This subject is really at the core of this refactoring, so it is important that we all understand it the same way and take the constraints of the video stack into account. 
Let's not hesitate to schedule a meeting about it if discussing it in a more synchronous fashion can help.
Attached patch Part 4 - Texture refactoring + OGL backend (obsolete) (deleted) — Splinter Review
If you guys don't find serious issues, I think this is landable (it is preffed off by default so it should not affect the current code paths for now).
Attachment #776494 - Attachment is obsolete: true
Attachment #776494 - Flags: feedback?(bgirard)
Attachment #776494 - Flags: feedback?(bas)
Attachment #779423 - Flags: review?(ncameron)
Attachment #779423 - Flags: review?(bas)
I think it's landable too :) Getting this landed and enabled is blocking a few performance bugs which affect OMTC on OSX. I'd really like to get this finished ASAP. Quick reviews would be appreciated.
(In reply to Nicolas Silva [:nical] from comment #55)
> I'd like to come back on what I said about the list of TextureHost in
> CompositableHost.
>
> Removing it makes things more complicated than they need to be because the
> compositable is what keeps the TextureHost alive, and the API lets us add
> compositables and then decide which one to use as front buffer. So keeping a
> list of the added TextureHosts is necessary to keep added TextureHosts alive
> (they should not die before RemoveTextureHost is called).
>
> What CompositableHost implementations really need to care about is
> UseTextureHost() so that they can remember which TextureHost to use as front
> buffer/texture on white/texture on black/etc.
>
> So I am really for keeping this list managed by the base class, since its
> purpose is to manage the lifetime of TextureHosts, not to decide which one
> to use for compositing.

I don't see this. I see that the compositable needs to keep references to the texture hosts in order to keep them alive, but I don't see why that requires a linked list. It could equally be done with an array, or a map, or just a whole bunch of named references. I think having a list makes things more complex because it adds a layer of indirection. Also, there is the grey area of a texture which is on the list but is not used for anything - this state doesn't seem to fulfil any need of the model; it is an exposed implementation detail.
(In reply to Nicolas Silva [:nical] from comment #53) > (In reply to Nick Cameron [:nrc] from comment #51) > > Comment on attachment 776494 [details] [diff] [review] > > Part 4 - Texture refactoring + OGL backend > > > > Review of attachment 776494 [details] [diff] [review]: > > ----------------------------------------------------------------- > > > > Damn, that is a lot of stuff - nical++ > > > > We need to talk about landing this and the d3d9 compositor work. That is > > mostly ready to go, but I think interacts heavily with this stuff. Please > > have a look at the patches on bug 874721 and let me know how badly you think > > we are going to stomp each other. > > I'll have a look. I am hoping that this since the old textures are still > around, there will be some work to port the d3d9 stuff to the new textures > but that landing d3d9 should not break the new textures and vis-versa. d3d9 > will only work with the deprecated textures pref on until we port it to the > new textures. > > > > > @@ +49,5 @@ > > > +const TextureFlags TEXTURE_FRONT = 1 << 10; > > > +// A texture host on white for component alpha > > > +const TextureFlags TEXTURE_ON_WHITE = 1 << 11; > > > + // A texture host on black for component alpha > > > +const TextureFlags TEXTURE_ON_BLACK = 1 << 12; > > > > I have a bad feeling about these flags - these things seem like properties > > of how the texture is used rather than of the texture itself, so they should > > be defined somewhere else, but I haven't got to how they are used yet, so > > this could be way off. > > At first I had those flags in different members (TexturePurpose, > RemovalFlags, TextureFlags) and it was confusing and error prone so I ended > up storing them in the same member. > > Some of them are important for the texture to know what to do and others are > important for the compositable to use the texture properly. 
In this way it's > not the best, but they are all about the texture, so they would need to be > stored in the texture anyway. ok. I guess this will be cleared once content client/hosts are implemented. > > > ::: gfx/layers/ImageContainer.h > > @@ +70,5 @@ > > > > > > public: > > > virtual ~Image() {} > > > > > > + virtual TextureClient* AsTextureClient() { return nullptr; } > > > > I don't think I like Images knowing about TextureClients, seems like we're > > leaking across abstraction layers. > > I disagree. Image itself is an abstraction of the layers system that leaks > into the media code in that sense (and has been designed to be the interface > between media and layers). > TextureClient is very similar to images in some ways, and the more we > optimize layers, the more we will want other parts of gecko to paint > directly into TextureClients. So if there is one abstraction that I want to > see leaked in the users of layers, it is definitely TextureClient. > Hmmm, that is interesting. I don't think I would want anything outside of the compositor knowing about TextureClients, never mind outside of layers, it just feels like too low-level an abstraction. I guess if Image is the abstraction we present to the outside world and is the gatekeeper between the abstract and the concrete, then it must know about TextureClients. But I would like to separate the interface of Image we present to the rest of Gecko and the interface we present to layers. I disagree that optimisation will require us to expose the inner workings of layers. We should strive to structure our API so that it can be fast without doing so and not make that sacrifice until we have to. Anyway, this is all very philosophical and doesn't really help with what to do with Image. Is it possible to separate out an interface to present to the world that doesn't know about TextureClients? 
> > > > ::: gfx/layers/ImageDataSerializer.cpp > > @@ +23,5 @@ > > > +// +-------------------+ ------+ > > > +// | | > > > +// | data | > > > +// | | > > > +// +-------------------+ > > > > Why don't we use a struct that embodies this structure? > > There is a struct for the metadata, I don't understand the quesion. > Then why do we have to pass around a char* or whatever it is, rather than a pointer to that struct? And why do we have to explain the structure in a comment? > > > +namespace mozilla { > > > +namespace layers { > > > + > > > +class ImageDataSerializerBase > > > > What is the motivation for having these in separate classes rather than just > > as methods in Image or somewhere? Should probably explain that in a comment > > here. > > I see it the other way around: > What is the motivation to place generic code into a method of a specific > class? Then can't it be in the generic super class, or is there not one here? > Plus, image is only used on the content side, while serialization, > deserialization is an ipc thing. > OK, that makes sense. > > > > ::: gfx/layers/basic/BasicCompositor.h > > @@ +37,5 @@ > > > > > > virtual void Destroy() MOZ_OVERRIDE; > > > > > > + virtual TemporaryRef<DataTextureSource> > > > + CreateDataTextureSource(TextureFlags aFlags = 0) MOZ_OVERRIDE { return nullptr; } > > > > Is this going to change in the future? Should have a TODO comment maybe. If > > we are going to leave it like this, it would be better to have MOZ_CRASH or > > change the name to MaybeCreatedData... > > It was pure virtual before I separated this into several patches. I'll add > an assert. 
> ok > > ::: gfx/layers/client/CompositableClient.cpp > > @@ +146,5 @@ > > > return result.forget(); > > > } > > > > > > +TemporaryRef<BufferTextureClient> > > > +CompositableClient::CreateBufferTextureClient(gfx::SurfaceFormat aFormat) > > > > I would prefer to have this as part of CreateTextureClient, if possible > > This returns a BufferTextureClient which has a superset of the TextureClient > API. We should avoid having code that calls in the very generic factory > methods and then unsafely casts to the type of class they expect. I'm not 100% convinced that avoiding a relatively safe cast is worth the code dup and confusion of which method to use, but ok. > The callers of this method need a BufferTextureClient so they should not > call a method that may return something that is not a BufferTextureClient. > > > > > @@ +148,5 @@ > > > > > > +TemporaryRef<BufferTextureClient> > > > +CompositableClient::CreateBufferTextureClient(gfx::SurfaceFormat aFormat) > > > +{ > > > + if (gfxPlatform::GetPlatform()->PreferMemoryOverShmem()) { > > > > I think this is wrong, whether we want memory or shmem might be different > > from one call to the next, not just a static platform wide thing. > > > > @@ +181,5 @@ > > > +void > > > +CompositableClient::OnTransaction() > > > +{ > > > + for (unsigned i = 0; i < mTexturesToRemove.size(); ++i) { > > > + mForwarder->RemoveTexture(this, mTexturesToRemove[i]); > > > > why not just do this in RemoveTextureClient? The forwarder is not going to > > do anything until the transaction anyway. > > There are cases where a TextureClient is removed outside of a transaction so > I had to add this deferred mechanism. I am not super happy with this but I > have plans to change it in another bug (when we will make it possible for > compositables to shared TextureClients) > Ah, right. If this is happening outside of a transaction, could it have a better name than OnTransaction, that is confusing. 
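The deferral being discussed here can be sketched as follows, with hypothetical names (the real code forwards through mForwarder and IPDL, which this stand-in replaces with an output vector): removals requested outside a transaction are only recorded, and the messages are flushed when OnTransaction() runs.

```cpp
#include <cassert>
#include <vector>

using TextureId = int;

// Stand-in for CompositableClient's deferred-removal bookkeeping.
class CompositableSketch {
public:
  // May be called at any time, e.g. when a TextureClient's refcount drops
  // to zero outside a transaction: nothing crosses IPC yet.
  void RemoveTexture(TextureId aId) { mTexturesToRemove.push_back(aId); }

  // Called during the transaction: now it is safe to message the other side.
  // aSentMessages stands in for mForwarder->RemoveTexture calls.
  void OnTransaction(std::vector<TextureId>& aSentMessages) {
    for (TextureId id : mTexturesToRemove) {
      aSentMessages.push_back(id);
    }
    mTexturesToRemove.clear();
  }

private:
  std::vector<TextureId> mTexturesToRemove;
};
```

The design point is that the unsafe part (IPC messaging) is confined to the transaction, while the safe part (recording the intent) can happen anywhere.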
> > > > ::: gfx/layers/client/ImageClient.cpp > > @@ +42,5 @@ > > > + if (gfxPlatform::GetPlatform()->UseDeprecatedTextures()) { > > > + result = new DeprecatedImageClientSingle(aForwarder, aFlags, BUFFER_IMAGE_BUFFERED); > > > + } else { > > > + // ImageBuffered is for async-video only, which does not need buffering > > > + // with new textures > > > > this comment does not make sense to me > > the only reason we used a buffered ImageClient is because with async-video > we need to retain a frame on the compositor side so that we can reupload it > when the layer manager is destroyed and rebuild (for instance whenever we > click on the title bar) to avoid the video area to flash black frames (since > the video receives frames asynchronously with respect to the management of > the layer tree). > With new textures there is no need to retain an extra texture anymore > because the shared data is always accessible by the compositor side. So we > don't need ImageClientBuffered anymore and in there we just return > ImageClientSingle. When we remove the old textures we'll remove the > BUFFER_IMAGE_BUFFERED token and it will be clearer. I updated the comment. > > > > > @@ +86,5 @@ > > > + if (mLastPaintedImageSerial == image->GetSerial()) { > > > + return true; > > > + } > > > + > > > + if (image->AsTextureClient()) { > > > > These if .. elif .. elif .. else blocks that have re-emerged in places like > > this make me extremely sad. For me one of the big goals of the refactoring > > was to get rid of them, but they are creeping back in. The blocks are all > > similar and the differences are due to using a different kind of texture. I > > think that any texture-specific code should be in the texture clients and > > the only if else etc. blocks should be when we create a texture client/host > > or check that we have the correct kind of client/host (and actually the > > latter case should probably be in the texture). 
If we can't do this because > > we are presenting different interfaces, then I think that is an indication > > that the choice of interfaces is broken and need re-working. > > Note that the the number of if .. else if blocks have reduced thanks to > AsTextureClient() so the situation (although not fixed) is strictly better. > > We need two cases: the "already shared" case and the "you need to copy in a > TextureClient" case > > The later will probably require that we add a more unified abstraction for > layer input that is not already a texture client. Anyway, I agree with you > on this but it's not in the scope of this bug (which already improves > things). Fair enough. > > > ::: gfx/layers/composite/CompositableHost.cpp > > @@ +81,5 @@ > > > + it->SetCompositor(aCompositor); > > > + it = it->GetNextSibling(); > > > + } > > > +} > > > + > > > > Why do texture hosts like this? I see that we get a bit more code sharing > > between compositables, but it seems to come at the expense of safety. It > > seems pretty easy to use textures for the wrong thing. From a software > > design 101 perspective, we should only have a list of things if each thing > > is treated in the same way. That is not the case here, for example a front > > buffer must not be treated in the same way as a back buffer, likewise for > > the Y plain vs the Cb plain or whatever. The only place it really makes > > sense is for tiling, and that uses a pretty different setup anyway. I think > > conceptually, we have a map from the use for a texture host to the texture > > host and representing that map as a list loses information. Therefore, I > > strongly prefer the existing situation where the compositable decides how to > > store its textures. Though I agree with the aim of sharing some more code > > here, possibly by providing more helper methods in CompositableHost. 
> > Yeah, I hoped to gain more from doing this that we actually gain now, I'll > revert to having each compositable host manage how to store it's textures, > but it means we'll have to be careful about a few things that all > compositables are expected to forward to their textures in some places. IMO > it will be much more probe to errors (due to forgetting to implement some > stuff in compositable implementations) but will be less confusing. > Maybe the best would be to expose a way to iterate over a compositable's > textures and all compositables would have to implement it so that the base > class can still take care of those things where forwarding to the Textures > is important. > I think it is reasonable to expect new subclasses to jump through some hoops in their implementation - authors of new subclasses can be expected to understand the superclass in depth. With regards to new clients of a class, we should be as defensive as possible. Authors of new clients will not understand our class in as much depth. Ideally both cases would be bullet proof, but where we have to make a choice (such as here) I prefer simplicity for clients at the expense of some complexity for new subclasses. > > > > ::: gfx/layers/composite/CompositableHost.h > > @@ +190,5 @@ > > > virtual void Dump(FILE* aFile=NULL, > > > const char* aPrefix="", > > > bool aDumpHtml=false) { } > > > static void DumpDeprecatedTextureHost(FILE* aFile, DeprecatedTextureHost* aTexture); > > > + static void DumpTextureHost(FILE* aFile, TextureHost* aTexture); > > > > Should all these Dump methods be DEBUG only? > > the #ifdef soup is a bit annoying. We end up breaking some configurations > without knowing (that's what happened with MOZ_DUMP_PAINTING a few months > ago) or it forces you to rebuild if you change something that may affect the > #ifdefed parts, so I'd rather document methods so that people do not use > them for non-debug purposes and make them always available. 
> Hmm, true, but there are advantages to debug only code - size being the obvious one, also guarantees that they won't be used in release builds, comments don't give hard guarantees. Seems like six of one and half a dozen of the other. But, I think there is a precedent in Mozilla code to use DEBUG ifdefs and we should stick to that unless there is a really strong motivation not too. > > > > ::: gfx/layers/composite/ImageHost.cpp > > @@ +60,5 @@ > > > + aFilter); > > > + aEffectChain.mPrimaryEffect = effect; > > > + TileIterator* it = source->AsTileIterator(); > > > + if (it) { > > > + it->BeginTileIteration(); > > > > what if we have a picture rect and tiles? > > Good question. We didn't support that before, so I didn't change it. > We should investigate that, I'll add an assertion in the mean time. > > > > > ::: gfx/layers/composite/ImageHost.h > > @@ +38,5 @@ > > > > > > +/** > > > + * Single-buffered ImageHost. > > > + */ > > > +class Image_Host_Single : public ImageHost > > > > Don't use underscores > Yeah sorry, that was just to prevent it from being renamed in the sed > scripts, I just forgot to rename it back. > > > > > ::: gfx/layers/composite/TextureHost.h > > @@ +171,5 @@ > > > + RefPtr<NewTextureSource> mNextSibling; > > > +}; > > > + > > > +/** > > > + * Interface for TextureSources that can be updated from a DataSourceSurface. > > > > I'm not sure what a texture source is for any more. They used to be just a > > view on a texture host or render target that could be used as the source for > > texturing. Now they have an Update function and siblings and so forth. The > > responsibilites between texture hosts and texture sources seem to be getting > > blurred. Do you have a strong story to tell for the different abstractions? > > A TextureSource is a thin abstraction over a Texture that can be used by the > compositor. They are not a view on the texture host anymore, because the > TextureHost is an IPC thing. 
If we were to implement non-OMTC layers > properly, we would use texture sources without texture hosts. Some of the > TextureSource classes unfortunately need to be used with a specific texture > host, but this is because the shared data and the device data is the same > and we can't get around that without impacting performances. > The update function is just for DataTextureSource, which is a Texture which > can be updated from a DataSourceSurface. Note that the updated part is the > device data, not the shared data. I think that splitting TextureHost and > TextureSource into the thing that cares about shared data and the thing that > cares about device data, we have cleaner abstractions with more precise > responsibilities (and we share much more code, since for example no backend > need to implement a YCbCrTextureHost anymore). > > The siblings thing is because the compositor should take lists of texture > sources rather than the sub-source stuff as I said in my previous comment. I > like intrusive linked lists because they are simple and efficient, others > don't like this pattern because it leaks the container into the element's > abstraction. Given that texture sources are used for one purpose only, it > makes sense to me to have the idea of the list be visible in the class > itself. > OK. If you haven't already, it would be great to have that or something like it in a comment. > > > > @@ +240,5 @@ > > > + * opposed to TextureSource that corresponds to device textures. > > > + * This means that for YCbCr planes, even though they are represented as > > > + * 3 textures internally (3 TextureSources), we have use 1 TextureHost and not 3, > > > + * because the 3 planes are stored in the same buffer of shared memory, before > > > + * they are uploaded separately. > > > > Hmm, so this goes some way to answering my question above. I think it makes > > sense. But I wonder if there is still the need for the old abstraction of a > > TextureView or something? 
Also TextureHost/Clients are now really badly > > named (oh dear, we are getting back into the naming debate again). I propose: > > > > TextureClient/Host -> BufferClient/Host > > NewTextureSource -> Texture > > TextureSource (old abstraction) -> TextureView. > > > > But, more importantly, please check that all the old comments are updated. > > I agree with those names, and I strongly suggests that we postpone the > naming debate to after this patch lands :) OK. > NewTextureSource and TextureSource will be merged after we remove the old > textures so the name for NewTextureSource isn't very important as it is just > temporary. > > > ::: gfx/layers/ipc/LayerTransaction.ipdlh > > @@ +313,5 @@ > > > +/** > > > + * Tells the compositor-side which texture to use (for example, as front buffer > > > + * if there is several textures for double buffering) > > > + */ > > > +struct OpUseTexture { > > > > Why do we need a separate message for this? Why not combine with Add? I > > don't even see any code that uses it. I much prefer to have methods/ops that > > take the required number of textures, makes it much harder to make a mistake. > > It is not used in ImageClient/Host, but the typical scenario for > TextureClient/Host is to have a compositable client use two of them for > double buffering and send OpUseTexture to tell the host side which one is > now the front buffer. > It just so happens that in the ImageClient case, we don't need double > buffering so we never need to tell the compositor side which texture is to > be presented. > OpAddTexture creates a new TextureHost, which is different form swaping > textures. > So every frame we would have to call UseTexture to tell the compositor side which host to use for what? That seems fragile and too generic. 
I much prefer having more specific messages like 'Swap' since having the generic 'Use' message opens up all kinds of possible corrupt states (the same texture as both back and front, one of front or back updated but not the other, a new texture for one but an old texture for other, etc.). > > > > ::: gfx/layers/ipc/SharedRGBImage.h > > @@ +82,5 @@ > > > > > > + > > > +/** > > > + * Stores RGB data in shared memory > > > + * It is assumed that the image width and stride are equal > > > > This seems a bold assumption > > Yes, the assumption was already present before this bug so I didn't change > it. Heh, fair enough. > (Imagine how much internal fight happens inside me when I am refactoring > things, want to fix the world but want to avoid the feature creep at the > same time. If I hadn't resisted this would have been a much bigger patch :p) > > > > > ::: gfx/layers/opengl/CompositingRenderTargetOGL.h > > @@ +151,5 @@ > > > } > > > > > > + gfx::SurfaceFormat GetFormat() const MOZ_OVERRIDE > > > + { > > > + // XXX - Should it be implemented ? is the above assert true ? > > > > I'm pretty sure that assert is wrong. The whole point of a render target is > > that it can be used as a texture source, no? How else do we do container > > layers with intermediate surfaces? > > I thought so too, let's fix this as a followup since it was already broken > before. > > > > > ::: gfx/layers/opengl/TextureHostOGL.cpp > > @@ +144,5 @@ > > > + nsIntSize size = ThebesIntSize(aSurface->GetSize()); > > > + if (!mTexImage || > > > + mTexImage->GetSize() != size || > > > + mTexImage->GetContentType() != gfx::ContentForFormat(aSurface->GetFormat())) { > > > + if (mAllowTiling) { > > > > Why do texture sources know about tiling? > > It's the tiling as in TextureImage tiling. We really need a different name > to differentiate this form the tiled layer stuff. 
So it makes sense because > the tiling here happens around the device texture, not the shared data, so > it really belongs to TextureSource. > Currently even the TextureImage tiling is mostly taken care of in the CompositableHost. What is the motivation to move it down to the TextureSource? > > > > ::: gfx/layers/opengl/TextureHostOGL.h > > @@ +181,5 @@ > > > + gl::GLContext* mGL; > > > + bool mAllowTiling; > > > + bool mIterating; > > > +}; > > > + > > > > Why do both texture sources and texture hosts have to be specialised? At the > > moment, only hosts are specialised aren't they? > > TextureSources are now separated from TextureHost. This lets us use some > TextureSource classes without a TextureHost (non-OMTC layers should > eventually use that, and the new FPS counter that Vlad wrote uses that > also), and it lets us share a lot of code, because some of the IPC stuff > (TextureHost) is not backend-specific (ShmemTextureHost, MemoryTextureHost, > including YCbCr) So by separating the TextureHost and Texture source we > don't need to implement Shmem, Memory and YCbCr hosts for each backend. And > that is pretty neat because for instance the basic backend now only needs to > implement a DataTextureSource and just works. > TextureSources are also specialized (if by that you mean have > implementations). Makes sense. What checks do we have to prevent mis-match of texture hosts and sources? Is that something we can assert safely?
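The intrusive sibling list discussed above (TextureSources chained through GetNextSibling, so that operations like SetCompositor can reach every plane of, say, a YCbCr image) might look like this sketch; the class name is illustrative, and the real code uses RefPtr rather than a raw pointer.

```cpp
#include <cassert>
#include <cstddef>

// Stand-in for TextureSource with an intrusive next-sibling link.
class TextureSourceSketch {
public:
  TextureSourceSketch() : mNextSibling(nullptr) {}
  void SetNextSibling(TextureSourceSketch* aNext) { mNextSibling = aNext; }
  TextureSourceSketch* GetNextSibling() const { return mNextSibling; }
private:
  TextureSourceSketch* mNextSibling; // RefPtr<NewTextureSource> in the patch
};

// Walking the siblings reaches every source without a separate container,
// e.g. to forward SetCompositor to the Y, Cb and Cr planes.
size_t CountSources(TextureSourceSketch* aFirst) {
  size_t n = 0;
  for (TextureSourceSketch* it = aFirst; it; it = it->GetNextSibling()) {
    ++n;
  }
  return n;
}
```

This shows the trade-off named in the comment: the container leaks into the element's abstraction, but since a source is only ever used this one way, the list stays trivially simple.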
Oh, and please file bugs for all the things you've said you'll fix later, thanks!
(In reply to Nick Cameron [:nrc] from comment #63) > > > > > ::: gfx/layers/ImageContainer.h > > > @@ +70,5 @@ > > > > > > > > public: > > > > virtual ~Image() {} > > > > > > > > + virtual TextureClient* AsTextureClient() { return nullptr; } > > > > > > I don't think I like Images knowing about TextureClients, seems like we're > > > leaking across abstraction layers. > > > > I disagree. Image itself is an abstraction of the layers system that leaks > > into the media code in that sense (and has been designed to be the interface > > between media and layers). > > TextureClient is very similar to images in some ways, and the more we > > optimize layers, the more we will want other parts of gecko to paint > > directly into TextureClients. So if there is one abstraction that I want to > > see leaked in the users of layers, it is definitely TextureClient. > > > > Hmmm, that is interesting. I don't think I would want anything outside of > the compositor knowing about TextureClients, never mind outside of layers, > it just feels like too low-level an abstraction. I guess if Image is the > abstraction we present to the outside world and is the gatekeeper between > the abstract and the concrete, then it must know about TextureClients. But I > would like to separate the interface of Image we present to the rest of > Gecko and the interface we present to layers. I disagree that optimisation > will require us to expose the inner workings of layers. We should strive to > structure our API so that it can be fast without doing so and not make that > sacrifice until we have to. The API we need to expose is a buffer that is shared with the compositor side, that can be locked and unlocked and kept alive through reference counting. This is exactly what TextureClient is. If we want to wrap this into another object that presents the same API, why not, but it would not buy us anything. 
Without TextureClient, users fall back on lower-level abstractions (the gralloc buffer itself, the memory in the shmem, etc.). Right now all the gonk code uses SurfaceDescriptors, which is much, much lower level, unsafe, and not meant for this purpose. You can't efficiently paint into something that is shared with several threads and processes without knowing that it is shared, because this implies more involved ownership and synchronization constraints. The new TextureClient has been specifically designed to provide just enough control over these things. And you also need to know what kind of memory you are sharing (something that can be mapped in memory and accessed through a pointer, something that lives in the GPU). I already said a lot about exposing TextureClient in Comment 59.

>
> Anyway, this is all very philosophical and doesn't really help with what to
> do with Image. Is it possible to separate out an interface to present to the
> world that doesn't know about TextureClients?

By exposing something that has the same API and forwards the calls to an underlying TextureClient, but that doesn't buy us anything except the satisfaction of not exposing TextureClient directly.

>
> > > ::: gfx/layers/ImageDataSerializer.cpp
> > > @@ +23,5 @@
> > > > +// +-------------------+ ------+
> > > > +// |                   |
> > > > +// |       data        |
> > > > +// |                   |
> > > > +// +-------------------+
> > >
> > > Why don't we use a struct that embodies this structure?
> >
> > There is a struct for the metadata, I don't understand the question.
>
> Then why do we have to pass around a char* or whatever it is, rather than a
> pointer to that struct? And why do we have to explain the structure in a
> comment?

This class is used to serialize and deserialize an image into a blob of memory in order to pass it through IPC. The structure is explained in the .cpp (not exposed to the rest of the world) so that people understand how the image is laid out in the blob if there is a bug some day.
We pass around a char* because it is the blob of data that is carried through IPC. It is similar to what AutoOpenSurface does but much simpler (AutoOpenSurface does that too, but buried in several layers of abstractions and calls, and it tries to handle all formats even though the caller already knows what to expect). This is just like YCbCrImageDataSerializer but for RGB stuff. It really is just to make the image go through IPC. When the compositor side wants to use the blob, it wraps it into a DataSourceSurface, which offers the proper level of abstraction for the image to be used (while this is just for the image to be transferred).

> > > @@ +181,5 @@
> > > > +void
> > > > +CompositableClient::OnTransaction()
> > > > +{
> > > > +  for (unsigned i = 0; i < mTexturesToRemove.size(); ++i) {
> > > > +    mForwarder->RemoveTexture(this, mTexturesToRemove[i]);
> > >
> > > why not just do this in RemoveTextureClient? The forwarder is not going to
> > > do anything until the transaction anyway.
> >
> > There are cases where a TextureClient is removed outside of a transaction so
> > I had to add this deferred mechanism. I am not super happy with this but I
> > have plans to change it in another bug (when we will make it possible for
> > compositables to share TextureClients)
>
> Ah, right. If this is happening outside of a transaction, could it have a
> better name than OnTransaction, that is confusing.

This specifically happens during the transaction: it is responsible for doing the work that was deferred to the transaction. Destroying the TextureClient/Host pair requires some in-transaction messaging, but the refcount of the TextureClient can drop to zero outside of a transaction, so we need to delay the messaging part until the transaction, hence OnTransaction().
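The blob layout described in the serializer discussion above (a small metadata struct at the front of the buffer, the image bytes right after it, the whole thing passed as a raw pointer through IPC) can be sketched as follows. The header fields and function names are illustrative, not the actual ImageDataSerializer layout.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical metadata header written at the front of the blob.
struct ImageBlobHeader {
  uint32_t width;
  uint32_t height;
  uint32_t format; // stands in for a surface-format enum value
};

// Serialize into a caller-provided blob (a Shmem buffer in practice).
void WriteImageBlob(uint8_t* aBlob, const ImageBlobHeader& aHeader,
                    const uint8_t* aPixels, size_t aPixelBytes) {
  memcpy(aBlob, &aHeader, sizeof(aHeader));
  memcpy(aBlob + sizeof(aHeader), aPixels, aPixelBytes);
}

// Deserialize: read the header back out of the raw blob...
ImageBlobHeader ReadHeader(const uint8_t* aBlob) {
  ImageBlobHeader h;
  memcpy(&h, aBlob, sizeof(h));
  return h;
}

// ...and locate the pixel data that follows it.
const uint8_t* ReadPixels(const uint8_t* aBlob) {
  return aBlob + sizeof(ImageBlobHeader);
}
```

This is why a char* is enough at the transport level: the receiving side knows the layout and can wrap the pixel region in a higher-level surface type once it arrives.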
> > > > > ::: gfx/layers/composite/CompositableHost.cpp > > > @@ +81,5 @@ > > > > + it->SetCompositor(aCompositor); > > > > + it = it->GetNextSibling(); > > > > + } > > > > +} > > > > + > > > > > > Why do texture hosts like this? I see that we get a bit more code sharing > > > between compositables, but it seems to come at the expense of safety. It > > > seems pretty easy to use textures for the wrong thing. From a software > > > design 101 perspective, we should only have a list of things if each thing > > > is treated in the same way. That is not the case here, for example a front > > > buffer must not be treated in the same way as a back buffer, likewise for > > > the Y plain vs the Cb plain or whatever. The only place it really makes > > > sense is for tiling, and that uses a pretty different setup anyway. I think > > > conceptually, we have a map from the use for a texture host to the texture > > > host and representing that map as a list loses information. Therefore, I > > > strongly prefer the existing situation where the compositable decides how to > > > store its textures. Though I agree with the aim of sharing some more code > > > here, possibly by providing more helper methods in CompositableHost. > > > > Yeah, I hoped to gain more from doing this that we actually gain now, I'll > > revert to having each compositable host manage how to store it's textures, > > but it means we'll have to be careful about a few things that all > > compositables are expected to forward to their textures in some places. IMO > > it will be much more probe to errors (due to forgetting to implement some > > stuff in compositable implementations) but will be less confusing. > > Maybe the best would be to expose a way to iterate over a compositable's > > textures and all compositables would have to implement it so that the base > > class can still take care of those things where forwarding to the Textures > > is important. 
> > > > I think it is reasonable to expect new subclasses to jump through some hoops > in their implementation - authors of new subclasses can be expected to > understand the superclass in depth. With regards to new clients of a class, > we should be as defensive as possible. Authors of new clients will not > understand our class in as much depth. Ideally both cases would be bullet > proof, but where we have to make a choice (such as here) I prefer simplicity > for clients at the expense of some complexity for new subclasses. I came back to that in Comment 55. Basically all compositable implementors would have to repeat the exact same thing, because the linked list is here to keep alive the TextureHosts that were added until they are removed. This is independent of what the compositable should present as front buffer, etc. (which is the responsibility of the sub-classes). So the responsibility of keeping the TextureHosts alive belongs to the base CompositableHost class (for now) and the responsibility of manipulating the textures belongs to the sub-classes. So it makes sense that this linked list stays in the base class. > > > ::: gfx/layers/ipc/LayerTransaction.ipdlh > > > @@ +313,5 @@ > > > > +/** > > > > + * Tells the compositor-side which texture to use (for example, as front buffer > > > > + * if there is several textures for double buffering) > > > > + */ > > > > +struct OpUseTexture { > > > > > > Why do we need a separate message for this? Why not combine with Add? I > > > don't even see any code that uses it. I much prefer to have methods/ops that > > > take the required number of textures, makes it much harder to make a mistake. > > > > It is not used in ImageClient/Host, but the typical scenario for > > TextureClient/Host is to have a compositable client use two of them for > > double buffering and send OpUseTexture to tell the host side which one is > > now the front buffer.
> > It just so happens that in the ImageClient case, we don't need double > > buffering so we never need to tell the compositor side which texture is to > > be presented. > > OpAddTexture creates a new TextureHost, which is different from swapping > > textures. > > > > So every frame we would have to call UseTexture to tell the compositor side > which host to use for what? That seems fragile and too generic. I much > prefer having more specific messages like 'Swap' since having the generic > 'Use' message opens up all kinds of possible corrupt states (the same > texture as both back and front, one of front or back updated but not the > other, a new texture for one but an old texture for other, etc.). You don't tell the compositor what to use as back buffer, you only tell it what to use as front buffer. The UseTexture message doesn't tell what to use the texture for, because the texture already knows its purpose from the TextureFlags (am I a texture on white / on black for component alpha, or just a texture to composite by itself?). So OpUseTexture cannot lead to corrupted state like the same front and back (since "back" is not a thing of its own, there is no "use this as a back buffer" message) or like a regular front buffer being used as the texture on black for component alpha (since what to use the texture for is information carried by the texture). OpUseTexture is basically a swap where the back buffer part is implicit, because the compositor side doesn't need to know about the back buffer (the base compositable class just keeps it alive), it only needs to know what the front buffer is. All the logic that concerns the back buffer is on the client side and the host only needs to composite the front. The only reason I didn't call it OpSwapTexture is because there already is an OpSwapTexture message for deprecated textures. We could rename OpUseTexture into OpSwapTextures after we get rid of deprecated textures.
The generic aspect of OpUseTexture (that it works whether or not there is a back buffer) is there because I don't want video, thebes and other layers to have separate protocols. We can express everything with a few simple messages, which keeps the system simple and prevents us from being tempted to give TextureClient/TextureHost different semantics depending on which protocol uses them. > > > Why do texture sources know about tiling? > > > > It's the tiling as in TextureImage tiling. We really need a different name > > to differentiate this from the tiled layer stuff. So it makes sense because > > the tiling here happens around the device texture, not the shared data, so > > it really belongs to TextureSource. > > > > Currently even the TextureImage tiling is mostly taken care of in the > CompositableHost. What is the motivation to move it down to the > TextureSource? The only tiling stuff that has moved into the TextureSource is what was in the TextureHost (which makes sense because the TextureImage that was doing the tiling has moved from the texture host to the TextureSource). The CompositableHost was taking care of iterating over the tiles and it still does that the exact same way. > > TextureSources are now separated from TextureHost. This lets us use some > > TextureSource classes without a TextureHost (non-OMTC layers should > > eventually use that, and the new FPS counter that Vlad wrote uses that > > also), and it lets us share a lot of code, because some of the IPC stuff > > (TextureHost) is not backend-specific (ShmemTextureHost, MemoryTextureHost, > > including YCbCr). So by separating the TextureHost and TextureSource we > > don't need to implement Shmem, Memory and YCbCr hosts for each backend. And > > that is pretty neat because for instance the basic backend now only needs to > > implement a DataTextureSource and just works. > > TextureSources are also specialized (if by that you mean have > > implementations). > > Makes sense.
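To make the "front buffer only" swap concrete, here is a minimal sketch with hypothetical names (std::shared_ptr standing in for RefPtr, a plain map standing in for the real compositable bookkeeping): the host keeps every added texture alive, and the OpUseTexture handler merely selects which one is presented. There is no "use as back buffer" message at all.

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <memory>

struct TextureHost { uint64_t id = 0; };

class CompositableHost {
  std::map<uint64_t, std::shared_ptr<TextureHost>> mTextures; // keeps every added texture alive
  std::shared_ptr<TextureHost> mFront;                        // the texture we composite
public:
  // Handler for OpAddTexture: create and retain a host for this id.
  void AddTexture(uint64_t aId) {
    auto host = std::make_shared<TextureHost>();
    host->id = aId;
    mTextures[aId] = host;
  }
  // Handler for OpRemoveTexture: drop our reference.
  void RemoveTexture(uint64_t aId) { mTextures.erase(aId); }
  // Handler for OpUseTexture: select the front buffer by id. Back buffers
  // are implicit: anything added but not "used" is simply kept alive.
  bool UseTexture(uint64_t aId) {
    auto it = mTextures.find(aId);
    if (it == mTextures.end()) {
      return false; // "Invalid texture ID"
    }
    mFront = it->second;
    return true;
  }
  const TextureHost* Front() const { return mFront.get(); }
};
```

Because the message only ever names the new front buffer, the "same texture as both back and front" corruption cannot be expressed in this sketch.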
What checks do we have to prevent mis-match of texture hosts > and sources? Is that something we can assert safely? The TextureHost creates its texture source so it can't really mismatch. The only TextureHost that goes through a factory method is BufferTextureHost, which calls Compositor::CreateDataTextureSource, returning a DataTextureSource, which is what BufferTextureHost works with. As soon as we stop using overly generic factory methods, we don't have mismatch problems anymore. And it's the same with TextureClients: if a class wants a TextureClientBuffer, it should call CreateTextureClientBuffer() and not call CreateTextureClient() + assert that the factory returned exactly what it wanted. I don't like factories that return a higher level of abstraction than what the caller needs.
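A sketch of the factory-method point, with illustrative names rather than the real Gecko API: the factory's static return type already matches the subtype the caller needs, so there is nothing to downcast and nothing to assert.

```cpp
#include <cassert>
#include <cstdint>
#include <memory>

struct TextureClient { virtual ~TextureClient() = default; };

// Hypothetical buffer-backed subtype, standing in for TextureClientBuffer.
struct TextureClientBuffer : TextureClient {
  bool Allocate(uint32_t aSize) { mSize = aSize; return true; }
  uint32_t mSize = 0;
};

struct CompositableClient {
  // Type-specific factory: the caller gets exactly what it asked for,
  // instead of a generic CreateTextureClient() plus a checked cast.
  std::unique_ptr<TextureClientBuffer> CreateTextureClientBuffer() {
    return std::unique_ptr<TextureClientBuffer>(new TextureClientBuffer());
  }
};
```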
Nick, if you still have questions or concerns, let's try to have a Skype call before nical goes to sleep.
I can be on Skype at midnight (Paris timezone, that would be 10am in NZ)
Comment on attachment 779423 [details] [diff] [review] Part 4 - Texture refactoring + OGL backend Review of attachment 779423 [details] [diff] [review]: ----------------------------------------------------------------- ::: gfx/layers/composite/TextureHost.h @@ +167,5 @@ > + return nullptr; > + } > + > +protected: > + RefPtr<NewTextureSource> mNextSibling; Please explain what this is needed for, and the rationale for a singly linked list (as opposed to an array, as opposed to a doubly linked list)? Is the reason not to use MFBT LinkedListElement<T> that you want strong references?
Comment on attachment 776502 [details] [diff] [review] Part 7 - Gralloc changes (WIP) Review of attachment 776502 [details] [diff] [review]: ----------------------------------------------------------------- ::: gfx/layers/opengl/TextureClientOGL.h @@ +53,5 @@ > + > +/** > + * A TextureClient implementation to share TextureMemory that is already > + * on the GPU, for the OpenGL backend, using a platform-specific optimization > + * called Gralloc I don't understand why we now need a specific TextureClient for the Gralloc case, while so far we were very happy with using TextureClientShmem for all kinds of Shmem including Gralloc? ::: gfx/layers/opengl/TextureHostOGL.h @@ +340,5 @@ > + > + bool IsValid() const; > + > +protected: > + CompositorOGL* mCompositor; Why the discrepancy between this class having CompositorOGL* mCompositor; and the below GrallocTextureHostOGL having RefPtr<CompositorOGL> mCompositor; ? @@ +346,5 @@ > + EGLImage mEGLImage; > + gfx::SurfaceFormat mFormat; > +}; > + > +class GrallocTextureHostOGL : public TextureHost GrallocTextureHostOGL and GrallocTextureSourceOGL seem redundant with each other. I guess that that was meant to be explained by the diagram attached to this bug, but I couldn't figure out what the dashed line between TextureSource and TextureHost meant. Anyhow, this is still some serious redundancy between these 2 classes. If I understand correctly, a GrallocTextureHostOGL owns a GrallocTextureSourceOGL. Why, then, does it need to have its own mGraphicBuffer etc. as opposed to using the members of its TextureSource?
(In reply to Nicolas Silva [:nical] from comment #67) > I can be on skype at midnight (Paris timezone, that would be 10am in NZ) Note that Matt is in Toronto for most of this summer ;-)
(In reply to Benoit Jacob [:bjacob] from comment #68) > Comment on attachment 779423 [details] [diff] [review] > Part 4 - Texture refactoring + OGL backend > > Review of attachment 779423 [details] [diff] [review]: > ----------------------------------------------------------------- > > ::: gfx/layers/composite/TextureHost.h > @@ +167,5 @@ > > + return nullptr; > > + } > > + > > +protected: > > + RefPtr<NewTextureSource> mNextSibling; > > Please explain what this is needed for, Effects now take a list of texture sources. This is a better way to handle having an effect with several textures (like YCbCrEffect), where we currently have a fake texture source that has a "GetSubSource(i)" method to provide access to the texture sources. > and the rationale for a singly > linked list (as opposed to an array, as opposed to a doubly linked list)? A singly linked list is extremely simple to implement, we only iterate in one direction, and even though an array could do the job, the linked list has the advantage of being compatible with the old textures and I find it more elegant. > Is the reason not to use MFBT LinkedListElement<T> that you want strong > references? Exactly
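A minimal sketch of such a strongly-referenced singly linked list, with std::shared_ptr standing in for RefPtr and illustrative member names (this is why an intrusive MFBT LinkedListElement<T>, which holds no strong references, was not used):

```cpp
#include <cassert>
#include <memory>

struct TextureSource {
  int plane = 0; // e.g. 0 = Y, 1 = Cb, 2 = Cr for a YCbCr effect
  // Strong reference: the first source keeps all its siblings alive.
  std::shared_ptr<TextureSource> mNextSibling;

  TextureSource* GetNextSibling() const { return mNextSibling.get(); }
  void SetNextSibling(std::shared_ptr<TextureSource> aNext) {
    mNextSibling = std::move(aNext);
  }
};

// Effects only ever walk the list forward, so one direction suffices.
int CountSources(const TextureSource* aFirst) {
  int n = 0;
  for (const TextureSource* it = aFirst; it; it = it->GetNextSibling()) {
    ++n;
  }
  return n;
}
```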
(In reply to Benoit Jacob [:bjacob] from comment #70) > (In reply to Nicolas Silva [:nical] from comment #67) > > I can be on skype at midnight (Paris timezone, that would be 10am in NZ) > > Note that Matt is in Toronto for most of this summer ;-) Matt already wants the new Textures to land. I was more aiming at getting to talk with Nick (hence NZ) :p (and Bas but he is busy at Siggraph).
(In reply to Benoit Jacob [:bjacob] from comment #69) > Comment on attachment 776502 [details] [diff] [review] > Part 7 - Gralloc changes (WIP) > > Review of attachment 776502 [details] [diff] [review]: > ----------------------------------------------------------------- > > ::: gfx/layers/opengl/TextureClientOGL.h > @@ +53,5 @@ > > + > > +/** > > + * A TextureClient implementation to share TextureMemory that is already > > + * on the GPU, for the OpenGL backend, using a platform-specific optimization > > + * called Gralloc > > I don't understand why we now need a specific TextureClient for the Gralloc > case, while so far we were very happy with using TextureClientShmem for all > kinds of Shmem including Gralloc? That's because I don't know very well how that works; I was on this area of the code when you were on PTO, and then I forgot. Let's talk about it on IRC or Skype and we should make it work again. It kinda sounds like black magic to me, though. How does locking work on the client side? This will land without gralloc support at first (it comes later in the patch queue and I am still working on that) and preffed off on B2G, so we have a bit more margin to make it work optimally there. > > ::: gfx/layers/opengl/TextureHostOGL.h > @@ +340,5 @@ > > + > > + bool IsValid() const; > > + > > +protected: > > + CompositorOGL* mCompositor; > > Why the discrepancy between this class having > > CompositorOGL* mCompositor; > > and the below GrallocTextureHostOGL having > > RefPtr<CompositorOGL> mCompositor; > > ? > > @@ +346,5 @@ > > + EGLImage mEGLImage; > > + gfx::SurfaceFormat mFormat; > > +}; > > + > > +class GrallocTextureHostOGL : public TextureHost > > GrallocTextureHostOGL and GrallocTextureSourceOGL seem redundant with each > other. I guess that that was meant to be explained by the diagram attached > to this bug, but I couldn't figure out what the dashed line between > TextureSource and TextureHost meant.
Anyhow, this is still some serious > redundancy between these 2 classes. You need both a TextureHost and a TextureSource because the inheritance graph doesn't let the TextureHost implement both the TextureHost and TextureSource interfaces (which is better in most cases, though it forces the Host to be very coupled with the Source). The redundancy is only in the data members; the two classes offer different APIs, and one cannot work without the other. > If I understand correctly, a GrallocTextureHostOGL owns a > GrallocTextureSourceOGL. Why, then, does it need to have its own > mGraphicBuffer etc. as opposed to using the members of its TextureSource? I could have kept the members only in the source (and it is a bit cleaner so I am going to change it to reflect that), but it's not that big of a deal, because even though there are redundant members, the functionality is not redundant (the TextureHost does not offer the APIs to draw the buffer, just some info like size, render state, format, etc.). Also, GrallocTextureSourceOGL is only created by the GrallocTextureHostOGL, and they have the same lifetime, so there is no risk of separating them and going into weird states.
Attached patch Part 7 - Gralloc changes (WIP) (obsolete) (deleted) — Splinter Review
Some update on the gralloc stuff. I am replacing the use of SurfaceDescriptor in the Gonk and Media code with a simple GrallocData struct that contains roughly the same information except that it doesn't use PGrallocBufferActor (and is independent from the internals of IPDL \o/). Also removed the redundant data members that Benoit pointed out in GrallocTextureHostOGL.
Attachment #776502 - Attachment is obsolete: true
Attachment #780353 - Flags: feedback?(bjacob)
Blocks: PTexture
Comment on attachment 780353 [details] [diff] [review] Part 7 - Gralloc changes (WIP) Review of attachment 780353 [details] [diff] [review]: ----------------------------------------------------------------- Feedback+ (we just had a skype call, went over this in detail).
Attachment #780353 - Flags: feedback?(bjacob) → feedback+
(In reply to Nicolas Silva [:nical] from comment #59) > (In reply to Bas Schouten (:bas.schouten) from comment #58) > > Comment on attachment 776494 [details] [diff] [review] > > Part 4 - Texture refactoring + OGL backend > > > > Review of attachment 776494 [details] [diff] [review]: > > ----------------------------------------------------------------- > > ::: gfx/layers/client/ImageClient.cpp > > @@ +86,5 @@ > > > + if (mLastPaintedImageSerial == image->GetSerial()) { > > > + return true; > > > + } > > > + > > > + if (image->AsTextureClient()) { > > > > I wasn't aware one of the big goals of the refactoring was to get rid of > > them, I didn't really feel they were a major problem anyway? ;) In general I > > think there's a trade-off between have polymorphism in functions like this > > and having a ridiculous amount of abstraction levels. Or condensing too many > > abstraction levels into a big class. I'm not sure whether an > > ImageClientSingleYCbCr, ImageClientSingleTextureClient and > > ImageClientSingleSurface or something are really the answer here, especially > > if an ImageClient needs to be able to morph between types. In this case I > > can't think of a particularly good solution since image objects are > > LayerManager independent for all kinds of good reasons :). We could move > > this 'GenerateFrontBufferFromImage' or something into a separate function so > > the if elif lives there, but introducing some intermediate interface here > > seems like a poor choice to me. > > I am not entirely sure which abstraction you criticize here, so I'll assume > you are talking about Image using and exposing a TextureClient. > The following rather long comment goes a bit beyond your comments but I have > the feeling we are not all on the same page about the rôle of texture client > and the problems that this refactoring is trying to fix. > > Having shared Image classes manipulate texture clients is something we > really really want. 
Media code, screenshots and other parts of Gecko keep > accessing the shared buffer while we composite it or when we could be > writing to it in, say, the ImageBridge thread. The Image really needs to > control the lifetime of its TextureClient and Lock/Unlock appropriately. > > This is definitely one of the big (if not the biggest) goals of this > refactoring. > > Remember that Image is an abstraction that belongs to the layers system > (it's not an abstraction of media), and TextureClient is the only safe way > to access shared buffers, so I'll insist on this: exposing TextureClient to > Image is not leaking the design of layers into Images. > > A good example of an abstraction that badly leaked everywhere is > SurfaceDescriptor. It is supposed to be a serialization helper mostly for > our IPDL glue, and it is all over the media code, especially in B2G camera > code. Rather than exposing naked surface descriptors with no synchronization > guarantee, we should expose either the right low-level buffer type (gralloc, > etc.) when we know that the buffer has not been presented to layers yet, or > a TextureClient if it is already in layers and potentially accessed by the > entire world. > > If you look at what ImageClientSingle was doing before, it was: > > if (type == SHARED_FOO) { > desc = casted_to_foo->GiveMeYourSurfaceDescriptor() > sendTheDescriptor(desc) > } else if (type == SHARED_BAR) { > desc = casted_to_bar->GiveMeYourSurfaceDescriptor() > sendTheDescriptor(desc) > #ifdef MOZ_WIDGET_GONK > } else if (type == SHARED_TYPE_THAT_DOES_THE_SAME_BUT_ONLY_FOR_B2G) { > desc = casted_to_b2g_thing->GiveMeYourSurfaceDescriptor() > sendTheDescriptor(desc) > #endif > } else if ... > // and so on for every image type > } > > All shared image types were doing the same thing, and didn't share an > interface. There is nothing to gain from all of these if/else blocks, and > worse, it even forced us to #ifdef B2G in the middle of some platform > independent code!
> > So it just makes sense that since shared image types *have* to use texture > clients for the reason I explained earlier, they should also expose it in a > virtual method. Once you expose a TextureClient to any compositable, the > latter knows exactly what to do with it (the base class does). So reducing the > if/else awkwardness is just a nice side-effect of forcing Images to use a > safe abstraction. > > Again, TextureClient should be the interface between layers and code that > wants to paint and expects the painted surface to be composited efficiently > (without copying) and safely (with proper reference counting and assurance > that the compositor will not kill or composite the surface while you are > painting to it). TextureClient is what we should be exposing to stuff as much > as we can. It is not yet-another-abstraction that sits in the middle, really. > > We could go further and remove the notion of Image. The only thing that > Image does and TextureClient does not is offer a container for a surface > that is not shared with the compositor. So if we don't like having too many > abstractions, let's just add that feature to the TextureClient API and kill > images completely (media code could manipulate a layers::TextureClient just > as easily as manipulating a layers::Image, and it would manipulate something > that the compositor knows how to work with). Plus, Image and its shared > flavors represent a feature that we partially duplicate on other layer types. > > A method like GenerateFrontBufferFromImage does not cover the needs of our > media code, which needs to share the same buffer with lots and lots of threads > and processes for efficiency. > When we finally get the time to implement it, we will even have some of the > decoder libraries writing directly inside a TextureClient (zero copy until > texture upload \o/).
> > This subject is really at the core of this refactoring, so it is important > that we all understand it the same way and take the constraints of the video > stack into account. Let's not hesitate to schedule a meeting about it if > discussing it in a more synchronous fashion can help. I was replying to Nick's comment ;) This is why you should read reviews in splinter and not in the bug comment ;-).
Comment on attachment 779423 [details] [diff] [review] Part 4 - Texture refactoring + OGL backend Review of attachment 779423 [details] [diff] [review]: ----------------------------------------------------------------- ::: gfx/layers/ImageDataSerializer.cpp @@ +52,5 @@ > + SurfaceBufferInfo* info = GetBufferInfo(mData); > + info->width = aSize.width; > + info->height = aSize.height; > + info->format = aFormat; > + info->offset = MOZ_ALIGN_WORD(sizeof(SurfaceBufferInfo)); I don't see anything having happened here to address my previous comments? Let me know when those comments are addressed and I'll re-review this bit. To restate my comment for ease of reading (which was a reply to Nick): 'I think because offset is set to be variable. I'm not sure why though since MOZ_ALIGN_WORD(sizeof(SurfaceBufferInfo)); is a known constant. We should fix that :). (fwiw the size of SurfaceBufferInfo -will- be word-aligned (even if you take offset out) since gfx::SurfaceFormat should be 4 bytes. If you have some platform where this might not be true just union the format with a uint32_t but I honestly doubt we support such a platform. We should also consider making it easier to switch alignment if we want to (Moz2D offers some useful stuff here, probably so does MFBT).' ::: gfx/layers/client/TextureClient.cpp @@ +98,5 @@ > +bool > +MemoryTextureClient::Allocate(uint32_t aSize) > +{ > + MOZ_ASSERT(!mBuffer); > + mBuffer = new uint8_t[aSize]; I'm a little bit concerned that this leaks if the buffer never ends up being sent across, but I'm not 100% sure of a good solution to this.
Comment on attachment 776497 [details] [diff] [review] Part 6 - D3D11 backend Review of attachment 776497 [details] [diff] [review]: ----------------------------------------------------------------- ::: gfx/layers/d3d11/TextureD3D11.cpp @@ +223,5 @@ > + (int) aSurface->Stride()); > + for (int j = 0; j < aSurface->GetSize().height; ++j) { > + memcpy(reinterpret_cast<uint8_t*>(mapInfo.pData) + mapInfo.RowPitch * j, > + aSurface->GetData() + aSurface->Stride() * j, > + minStride); Don't copy the minimum stride, copy the width * bytesPerPixel. That avoids the std::min call and might reduce the amount copied.
Attachment #776497 - Flags: review?(bas) → review+
Fixed the redundant offset in ImageDataSerializer (I used it as a crude sanity check but I understand it doesn't feel right). Also addressed the concern about leaking buffers if a TextureClient is never shared with the compositor: the texture client's destructor now checks whether it has been shared and whether either the client or the host is supposed to own the data according to the texture flags (if TEXTURE_DEALLOCATE_CLIENT and TEXTURE_DEALLOCATE_HOST are both unset, it means the data belongs to some other system (say, the gonk camera) and we don't want to deallocate it). I also added a flag to mark textures as "immutable". Once a texture is marked immutable nobody can write into it (only read it), and we can safely skip Lock/Unlock or double buffering (important because we don't yet have cross-process locks on all platforms; now we can assert that the texture is immutable if we are using it in a scenario where we would otherwise have needed to lock it). A typical use case for immutable textures is video frames. Texture flags must be set before the texture is shared with the compositor side (so there is no synchronization needed around marking a texture as immutable).
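A sketch of how these flags compose; the constants' values and helper names here are illustrative, not the real Gecko definitions.

```cpp
#include <cassert>
#include <cstdint>

using TextureFlags = uint32_t;
const TextureFlags TEXTURE_DEALLOCATE_CLIENT = 1 << 0;
const TextureFlags TEXTURE_DEALLOCATE_HOST   = 1 << 1;
const TextureFlags TEXTURE_IMMUTABLE         = 1 << 2;

// If neither side owns the data, it belongs to an external system
// (e.g. the camera stack) and layers must not deallocate it.
bool LayersOwnsData(TextureFlags aFlags) {
  return (aFlags & (TEXTURE_DEALLOCATE_CLIENT | TEXTURE_DEALLOCATE_HOST)) != 0;
}

// Reading without Lock/Unlock is only safe once the texture is immutable;
// flags are fixed before sharing, so this needs no synchronization.
bool NeedsLockForRead(TextureFlags aFlags) {
  return (aFlags & TEXTURE_IMMUTABLE) == 0;
}
```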
Attachment #779423 - Attachment is obsolete: true
Attachment #779423 - Flags: review?(ncameron)
Attachment #779423 - Flags: review?(bas)
Attachment #781778 - Flags: review?(bas)
Depends on: 898828
Comment on attachment 781778 [details] [diff] [review] Part 4 - Texture refactoring + OGL backend (carrying nrc: r+) Review of attachment 781778 [details] [diff] [review]: ----------------------------------------------------------------- ::: gfx/layers/Compositor.h @@ +189,5 @@ > + * Return true if the effect type is supported. > + * > + * By default Compositor implementations should support all effects but in > + * some rare cases it is not possible to support an effect efficiently. > + * this is the case for BasicCompositor with EffetcYCbCr. nit: Typo, also, is this true? I mean, it can do it, I understand you might rather want to do it on the decoding thread though. ::: gfx/layers/CompositorTypes.h @@ +47,3 @@ > > +// A texture host that supports tiling > +const TextureFlags TEXTURE_FRONT = 1 << 10; If we're going to combine multiple bitwise flags in a single value and you want to have jumps between groups, do the intuitive thing and make them 4-bit aligned, so that in a hex view they'll be intuitively grouped (I'd make this 12). @@ +65,5 @@ > +const TextureFlags TEXTURE_DEALLOCATE_HOST = 1 << 22; > +// After being shared ith the compositor side, an immutable texture is never > +// modified, it can only be read. It is safe to not Lock/Unlock immutable > +// textures. > +const TextureFlags TEXTURE_IMMUTABLE = 1 << 23; And this 24 probably if you're grouping anyway? :) ::: gfx/layers/ImageDataSerializer.cpp @@ +31,5 @@ > +struct SurfaceBufferInfo > +{ > + uint32_t width; > + uint32_t height; > + gfx::SurfaceFormat format; You could simply guarantee the size of this struct by putting in a union { struct { } uint8_t mPadding[32] }; I suppose, although that's a little ugly as well. 
@@ +35,5 @@ > + gfx::SurfaceFormat format; > + > + static uint32_t GetOffset() > + { > + return gfx::GetAlignedStride<16>(sizeof(SurfaceBufferInfo)); I still think it's a little nasty to do a runtime calculation for something that is statically known, maybe the compiler will figure this out I suppose. But it's a little ugly :). @@ +65,5 @@ > + // stride, we may decide to change that if we want aligned stride. > + gfxIntSize gfxSize = gfxIntSize(aSize.width, aSize.height); > + uint32_t bufsize = aSize.height * gfx::BytesPerPixel(aFormat) * aSize.width; > + return SurfaceBufferInfo::GetOffset() > + + gfx::GetAlignedStride<16>(bufsize); This is a little unintuitive as normally you'd use a 16 byte aligned stride in the image for each line, or not at all, not a big problem though :). ::: gfx/layers/ipc/CompositableTransactionParent.cpp @@ +202,5 @@ > } > + case CompositableOperation::TOpUseTexture: { > + const OpUseTexture& op = aEdit.get_OpUseTexture(); > + if (op.textureID() == 0) { > + NS_WARNING("Invald texture ID"); nit: Invalid @@ +219,5 @@ > + } > + case CompositableOperation::TOpAddTexture: { > + const OpAddTexture& op = aEdit.get_OpAddTexture(); > + if (op.textureID() == 0) { > + NS_WARNING("Invald texture ID"); Same nit. ::: gfx/layers/ipc/SharedPlanarYCbCrImage.cpp @@ +128,5 @@ > + NS_ABORT_IF_FALSE(!mTextureClient->IsAllocated(), > + "This image already has allocated data"); > + if (!mTextureClient->Allocate(aSize)) { > + return nullptr; > + } I think it's better to make this return a bool and then have users call AllocateBuffer first and then GetBuffer. This is a little confusing since it suggests it might transfer ownership. I guess this is a problem with the existing API though, we might want to track that somewhere and fix it. 
::: gfx/layers/ipc/SharedPlanarYCbCrImage.h @@ +104,5 @@ > + virtual bool Allocate(PlanarYCbCrImage::Data& aData); > + virtual uint8_t* AllocateBuffer(uint32_t aSize) MOZ_OVERRIDE; > + // needs to be overriden because the parent class sets mBuffer which we > + // do not want to happen. > + uint8_t* AllocateAndGetNewBuffer(uint32_t aSize) MOZ_OVERRIDE; nit: virtual ::: gfx/layers/opengl/CompositorOGL.cpp @@ +1415,5 @@ > +CompositorOGL::CreateDataTextureSource(TextureFlags aFlags) > +{ > + RefPtr<DataTextureSource> result = > + new TextureImageTextureSourceOGL(mGLContext, !(aFlags & ForceSingleTile)); > + return result.forget(); nit: In mfbt you can just do return result; with TemporaryRef<T> and it'll do the sane thing.
Attachment #781778 - Flags: review?(bas) → review+
Depends on: 899044
(In reply to Bas Schouten (:bas.schouten) from comment #83) > Comment on attachment 781778 [details] [diff] [review] > Part 4 - Texture refactoring + OGL backend (carrying nrc: r+) > > Review of attachment 781778 [details] [diff] [review]: > ----------------------------------------------------------------- > > ::: gfx/layers/Compositor.h > @@ +189,5 @@ > > + * Return true if the effect type is supported. > > + * > > + * By default Compositor implementations should support all effects but in > > + * some rare cases it is not possible to support an effect efficiently. > > + * this is the case for BasicCompositor with EffetcYCbCr. > > nit: Typo, also, is this true? I mean, it can do it, I understand you might > rather want to do it on the decoding thread though. Yes, the basic compositor asserts and crashes with EffectYCbCr. I agree we probably want to do the conversion somewhere else for efficiency, but I like to have the compositor expose that it cannot do something so that backend-independent code can anticipate and do the right thing rather than expect things to work and then crash (if only for a last-resort fallback path). > > You could simply guarantee the size of this struct by putting in a union { > struct { } uint8_t mPadding[32] }; I suppose, although that's a little ugly > as well. > > @@ +35,5 @@ > > + gfx::SurfaceFormat format; > > + > > + static uint32_t GetOffset() > > + { > > + return gfx::GetAlignedStride<16>(sizeof(SurfaceBufferInfo)); > > I still think it's a little nasty to do a runtime calculation for something > that is statically known, maybe the compiler will figure this out I suppose. > But it's a little ugly :). I find explicitly computing the offset much easier to read, and this is not a performance-critical path. 
So I prefer to keep the potential runtime computation (I mean, the code is inlined and only uses constants, so I would expect it to be resolved at compile time in a sufficiently optimized build; anyway, I think it doesn't matter) over using a union, which is less self-documenting and harder to quickly read and understand. > I think it's better to make this return a bool and then have users call > AllocateBuffer first and then GetBuffer. This is a little confusing since it > suggests it might transfer ownership. I guess this is a problem with the > existing API though, we might want to track that somewhere and fix it. Exactly. The Image API in general and its inheritance graph need a bit of cleanup. I fixed the other comments.
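For reference, a standalone sketch of the aligned-offset computation being discussed, mirroring the shape of GetAlignedStride<16> with a simplified SurfaceBufferInfo (the format member is a plain uint32_t stand-in). Everything is constexpr, so the "runtime" calculation can indeed fold away at compile time.

```cpp
#include <cassert>
#include <cstdint>

// Round aSize up to the next multiple of Alignment (a power of two).
template <uint32_t Alignment>
constexpr uint32_t GetAlignedStride(uint32_t aSize) {
  return (aSize + Alignment - 1) & ~(Alignment - 1);
}

// Metadata stored at the front of the shared blob.
struct SurfaceBufferInfo {
  uint32_t width;
  uint32_t height;
  uint32_t format; // stand-in for gfx::SurfaceFormat
  // Offset of the pixel data: the header size rounded up to 16 bytes.
  static constexpr uint32_t GetOffset() {
    return GetAlignedStride<16>(sizeof(SurfaceBufferInfo));
  }
};

// Total shared-buffer size: header offset plus 16-byte-aligned pixel data.
constexpr uint32_t ComputeBufferSize(uint32_t aWidth, uint32_t aHeight,
                                     uint32_t aBytesPerPixel) {
  return SurfaceBufferInfo::GetOffset() +
         GetAlignedStride<16>(aWidth * aHeight * aBytesPerPixel);
}
```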
Nical: I'm sorry I haven't got round to doing the review yet, I've been drowning in Windows compositor bugs. I have thought about the things we discussed on the phone and I think I am happy with the important things. I think you should add a big comment somewhere explaining how the compositables are essentially faking PTexture, if you haven't done that already. Anyway, given that I've looked at the patch a few times, that we've discussed all my worries, that Bas has r+ed it, and that we're pretty close to the uplift, I'm happy for you to land this without my review - I'm confident I will r+ it and you can address any minor things as a follow-up. I will do a thorough review tomorrow or Thursday.
Oops, you feedback+ed it and I thought you had r+ed it. Sorry I put "carrying
nrc: r+" in the patch name.

I have already added that comment about PTexture.
Depends on: 899583
Attachment #776495 - Flags: review?(matt.woodrow)
Blocks: 900012
Depends on: 900133
Blocks: 900244
Comment on attachment 781778 [details] [diff] [review]
Part 4 - Texture refactoring + OGL backend (carrying nrc: r+)

Review of attachment 781778 [details] [diff] [review]:
-----------------------------------------------------------------

I haven't finished my review yet. Still outstanding are basically all the
host files, but I'm done for the day. More tomorrow. 95% of these are nits.

::: gfx/gl/GLTextureImage.h
@@ +171,5 @@
> * aFrom - offset in the source to update from
> */
> virtual bool DirectUpdate(gfxASurface *aSurf, const nsIntRegion& aRegion, const nsIntPoint& aFrom = nsIntPoint(0,0)) = 0;
> + // Moz2D equivalent
> + bool UpdateFromDataSource(gfx::DataSourceSurface *aSurf,

should it also be virtual?

::: gfx/layers/CompositorTypes.h
@@ +19,5 @@
> * side to host side when textures and compositables are created. Usually set
> * by the compositableCient, they may be modified by either the compositable or
> * texture clients.
> + *
> + * XXX - switch to all caps constant names which seems to be the standard in gecko

Please file a bug for this too

::: gfx/layers/Effects.h
@@ +7,5 @@
> #define MOZILLA_LAYERS_EFFECTS_H
> 
> #include "mozilla/gfx/Matrix.h"
> #include "mozilla/layers/Compositor.h"
> +#include "mozilla/layers/TextureHost.h"

Do we really need this?
::: gfx/layers/ImageDataSerializer.cpp
@@ +40,5 @@
> + }
> +};
> +} // anonymous namespace
> +
> +static SurfaceBufferInfo* GetBufferInfo(uint8_t* aBuffer)

new line for type

::: gfx/layers/basic/BasicCompositor.h
@@ +32,5 @@
> BasicCompositor(nsIWidget *aWidget);
> 
> virtual ~BasicCompositor();
> 
> +

don't add this blank line

::: gfx/layers/client/ClientCanvasLayer.h
@@ +35,5 @@
> {
> MOZ_COUNT_DTOR(ClientCanvasLayer);
> + if (mCanvasClient) {
> + mCanvasClient->Detach();
> + mCanvasClient = nullptr;

no point setting to null

::: gfx/layers/client/ClientLayerManager.cpp
@@ +326,5 @@
> + case EditReply::TReplyTextureRemoved: {
> + // XXX - to manage reuse of gralloc buffers, we'll need to add some
> + // glue code here to find the TextureClient and invoke a callback to
> + // let the camera know that the gralloc buffer is not used anymore on
> + // the compositor side and that it can reuse it.

can you file a bug for this and add the bug number to the comment please

::: gfx/layers/client/ClientThebesLayer.h
@@ +28,5 @@
> virtual ~ClientThebesLayer()
> {
> + if (mContentClient) {
> + mContentClient->Detach();
> + mContentClient = nullptr;

no point setting to null just as we die

::: gfx/layers/client/CompositableClient.cpp
@@ +29,5 @@
> CompositableClient::~CompositableClient()
> {
> MOZ_COUNT_DTOR(CompositableClient);
> Destroy();
> + MOZ_ASSERT(mTexturesToRemove.size() == 0, "would leak textures pending fore deletion");

typo

::: gfx/layers/client/CompositableClient.h
@@ +109,5 @@
> */
> uint64_t GetAsyncID() const;
> 
> + /**
> + * Tells the Compositor to create a TextureHost for this TextureClient.

presumably it keeps a reference to aClient or something too? Should add it
to the comment, it reads a bit strange atm.

@@ +120,5 @@
> + */
> + virtual void RemoveTextureClient(TextureClient* aClient);
> +
> + /**
> + * A hook for the Compositable to execute whatever it held off for next trasanction.
the next transaction (sp)

@@ +127,5 @@
> +
> + /**
> + * A hook for the when the Compositable is detached from it's layer.
> + */
> + virtual void Detach() {}

If this is just a hook (ie, doesn't do any detaching) then OnDetach is a
better name.

@@ +133,3 @@
> protected:
> + // The textures to destroy in the next transaction;
> + std::vector<uint64_t> mTexturesToRemove;

prefer nsTArray to std::vector

@@ +133,4 @@
> protected:
> + // The textures to destroy in the next transaction;
> + std::vector<uint64_t> mTexturesToRemove;
> + uint64_t mNextTextureID;

why by id and not by pointer? Even if we just look the id up in the texture,
it makes debugging easier if you keep a reference to the texture.

::: gfx/layers/client/ImageClient.cpp
@@ +26,5 @@
> TextureFlags aFlags)
> {
> RefPtr<ImageClient> result = nullptr;
> switch (aCompositableHostType) {
> + case COMPOSITABLE_IMAGE:

what is this for, if it does the same thing as BUFFER_IMAGE_SINGLE?

@@ +38,5 @@
> case BUFFER_IMAGE_BUFFERED:
> + if (gfxPlatform::GetPlatform()->UseDeprecatedTextures()) {
> + result = new DeprecatedImageClientSingle(aForwarder, aFlags, BUFFER_IMAGE_BUFFERED);
> + } else {
> + // ImageClientBuffered was a hack for async-video only, and the new textures

So is ImageClientBuffered used anywhere? If not, please remove it.

@@ +95,5 @@
> + }
> +
> + if (image->AsSharedImage() && image->AsSharedImage()->GetTextureClient()) {
> + // fast path: no need to allocate and/or copy image data
> + RefPtr<TextureClient> tex = image->AsSharedImage()->GetTextureClient();

tex is a bit of a rubbish variable name, please use something easier to parse

::: gfx/layers/client/ImageClient.h
@@ +85,5 @@
> + TextureFlags mTextureFlags;
> +};
> +
> +/**
> + * An image client which uses a two texture clients.
extraneous 'a'

::: gfx/layers/client/TextureClient.cpp
@@ +26,5 @@
> namespace layers {
> 
> +
> +
> +

we don't need so many newlines here

@@ +41,5 @@
> +TextureClient::ShouldDeallocateInDestructor() const
> +{
> + return IsAllocated() &&
> + !IsSharedWithCompositor() &&
> + !(GetFlags() & (TEXTURE_DEALLOCATE_HOST|TEXTURE_DEALLOCATE_CLIENT));

spaces around |

::: gfx/layers/client/TextureClient.h
@@ +31,5 @@
> +
> +/**
> + * TextureClient is the abstraction that allows us to share data between the
> + * content and the compositor side.
> + * TextureClient can also provide with some more more "producer" facing APIs

typo: more more

please make explicit that this is the content-side class

@@ +68,5 @@
> + * understood by the IPC system. There are two ways to use it:
> + * - Use it to serialize image data that is not IPC-friendly (most likely
> + * involving a copy into shared memory)
> + * - preallocate it and paint directly into it, which avoids copy but requires
> + * the painting code to be aware of TextureClient (or at least the underlying

whitespace

@@ +69,5 @@
> + * - Use it to serialize image data that is not IPC-friendly (most likely
> + * involving a copy into shared memory)
> + * - preallocate it and paint directly into it, which avoids copy but requires
> + * the painting code to be aware of TextureClient (or at least the underlying
> + * shared memory) .

.

@@ +73,5 @@
> + * shared memory)
> + *
> + * There is always one and only one TextureClient per TextureHost, and the
> + * TextureClient/Host pair only owns one buffer of image data through its
> + * lifetime. This means that to the lifetime of the underlying shared data

extraneous 'to'?
@@ +109,5 @@
> + {
> + return mID;
> + }
> +
> + void SetNextSibling(TextureClient* aNext)

please comment to explain this sibling business

@@ +125,5 @@
> + virtual bool ToSurfaceDescriptor(SurfaceDescriptor& aDescriptor) = 0;
> +
> + void SetFlags(TextureFlags aFlags) { mFlags = aFlags; }
> +
> + void AddFlags(TextureFlags aFlags) {

{ on new line

@@ +127,5 @@
> + void SetFlags(TextureFlags aFlags) { mFlags = aFlags; }
> +
> + void AddFlags(TextureFlags aFlags) {
> + // make sure we don't deallocate on both client and host;
> + MOZ_ASSERT(!(aFlags & TEXTURE_DEALLOCATE_CLIENT && aFlags & TEXTURE_DEALLOCATE_HOST));

clearer to use XOR operator

@@ +132,5 @@
> + if (aFlags & TEXTURE_DEALLOCATE_CLIENT) {
> + mFlags &= ~TEXTURE_DEALLOCATE_HOST;
> + } else if (aFlags & TEXTURE_DEALLOCATE_HOST) {
> + mFlags &= ~TEXTURE_DEALLOCATE_CLIENT;
> + }

I don't think we need this lot as well as the assert. Either get rid of these
(preferred) or add warnings and get rid of the assert (if you're paranoid).
Hmmm, actually I see they are different things. I think it should be an error
to add a different deallocate if one is already set, rather than silently
accepting it.

@@ +137,5 @@
> + mFlags |= aFlags;
> + }
> +
> + void RemoveFlags(TextureFlags aFlags) {
> + mFlags &= (~aFlags);

should probably assert that someone will still deallocate the texture

@@ +143,5 @@
> +
> + TextureFlags GetFlags() const { return mFlags; }
> +
> + /**
> + * After being shared ith the compositor side, an immutable texture is never

*with

::: gfx/layers/composite/TextureHost.h
@@ +219,5 @@
> + *
> + * This is expected to be very slow and should be used for mostly debugging.
> + * XXX - implement everywhere and make it pure virtual.
> + */
> + virtual TemporaryRef<gfx::DataSourceSurface> ReadBack() { return nullptr; };

make this ifdef DEBUG

@@ +338,5 @@
> + * TextureHosts of a given CompositableHost always have different IDs.
> + * TextureHosts of different CompositableHosts, may have the same ID.
> + * Zero is always an invalid ID.
> + */
> + uint64_t GetID() const { return mID; }

is it useful for a texture host to know its ID? As opposed to only the
compositable knowing it and mapping it to an actual texture host?

@@ +356,5 @@
> + /**
> + * Debug facility.
> + * XXX - cool kids use Moz2D
> + */
> + virtual already_AddRefed<gfxImageSurface> GetAsSurface() = 0;

make it ifdef DEBUG

@@ +376,5 @@
> + * XXX - more doc here
> + */
> + virtual LayerRenderState GetRenderState()
> + {
> + // By default we return an empty render state, this should be overriden

dd

@@ +377,5 @@
> + */
> + virtual LayerRenderState GetRenderState()
> + {
> + // By default we return an empty render state, this should be overriden
> + // by the TextureHost implementations that are used on B2G with Composer2D

Composer2D is not actually tied to b2g, although that is the only platform
that uses it at the moment. Only the Hwc implementation is b2g-specific.
Composer2D is a general purpose API.

::: gfx/layers/ipc/CompositableTransactionParent.cpp
@@ +255,5 @@
> +
> + // if it is not the host that deallocates the shared data, then we need
> + // to notfy the client side to tell when it is safe to deallocate or
> + // reuse it.
> + if (!(flags & TEXTURE_DEALLOCATE_HOST)) {

make this an else branch of the if above

@@ +278,5 @@
> + texture->Updated(&region);
> + } else {
> + // no region means invalidate the entire surface
> + texture->Updated(nullptr);
> + }

texture->Updated(op.region().type() == MaybeRegion::TnsIntRegion ?
&op.region().get_nsIntRegion() : nullptr);

I think that is clearer than the if-else block, but won't object if you
prefer it as is

::: gfx/layers/ipc/CompositorParent.h
@@ +175,5 @@
> */
> static void SetTimeAndSampleAnimations(TimeStamp aTime, bool aIsTesting);
> 
> + /**
> + * Returns true if the calling thrad is the compositor thread.

typo: thrad

@@ +177,5 @@
> 
> + /**
> + * Returns true if the calling thrad is the compositor thread.
> + */
> + static bool IsInCompositorThread();

Seems an odd place for this. Don't we have a utils class for things like
this - ShadowLayersUtils?

::: gfx/layers/ipc/ImageBridgeChild.cpp
@@ +98,5 @@
> + uint64_t aTexture,
> + TextureFlags aFlags)
> +{
> + mTxn->AddNoSwapEdit(OpRemoveTexture(nullptr, aCompositable->GetIPDLActor(),
> + aTexture,

indent

@@ +107,5 @@
> +ImageBridgeChild::UseTexture(CompositableClient* aCompositable,
> + TextureClient* aTexture)
> +{
> + mTxn->AddNoSwapEdit(OpUseTexture(nullptr, aCompositable->GetIPDLActor(),
> + aTexture->GetID()));

indent

@@ -338,5 @@
> AutoEndTransaction _(mTxn);
> 
> - if (mTxn->IsEmpty()) {
> - return;
> - }

Why are you removing this?

::: gfx/layers/ipc/LayerTransaction.ipdlh
@@ +314,5 @@
> +/**
> + * Tells the compositor-side which texture to use (for example, as front buffer
> + * if there is several textures for double buffering)
> + */
> +struct OpUseTexture {

Can we remove this for now? Seeing as its not used and we discussed a
different approach for the long run.

@@ +399,5 @@
> union EditReply {
> OpContentBufferSwap;
> OpTextureSwap;
> +
> + // new stuff

remove this comment

::: gfx/layers/ipc/ShadowLayers.cpp
@@ +404,5 @@
> +ShadowLayerForwarder::UpdatedTexture(CompositableClient* aCompositable,
> + TextureClient* aTexture,
> + nsIntRegion* aRegion)
> +{
> + printf("ShadowLayerForwarder::UpdatedTexture %i\n", (int)aTexture->GetID());

remove printf

::: gfx/layers/ipc/SharedPlanarYCbCrImage.cpp
@@ +67,5 @@
> + // since it will trigger a full copy.
> + if (!mTextureClient->IsAllocated()) {
> + Data data = aData;
> + if (!Allocate(data)) {
> + printf("SharedPlanarYCbCrImage::SetData failed to allocate :(\n");

remove printf or change to warning

::: gfx/layers/ipc/SharedRGBImage.cpp
@@ +62,5 @@
> return nullptr;
> }
> 
> + if (gfxPlatform::GetPlatform()->UseDeprecatedTextures()) {
> + nsRefPtr<DeprecatedSharedRGBImage> rgbImageDep = static_cast<DeprecatedSharedRGBImage*>(image.get());

this block is over-indented

::: gfx/layers/opengl/CompositingRenderTargetOGL.h
@@ +151,5 @@
> }
> 
> + gfx::SurfaceFormat GetFormat() const MOZ_OVERRIDE
> + {
> + // XXX - Should it be implemented ? is the above assert true ?

it is not true. Please file a bug for these missing methods.

::: gfx/layers/opengl/TextureClientOGL.cpp
@@ +14,5 @@
> namespace mozilla {
> namespace layers {
> 
> +SharedTextureClientOGL::SharedTextureClientOGL()
> +: mHandle(0), mIsCrossProcess(false), mInverted(false)

line for each

@@ +66,5 @@
> +{
> + MOZ_COUNT_CTOR(GrallocTextureClientOGL);
> +}
> +
> +GrallocTextureClientOGL::GrallocTextureClientOGL(CompositableClient* aCompositable,

Where is this class declared? It should probably be in TextureClientOGL.h

@@ +93,5 @@
> +
> +bool
> +GrallocTextureClientOGL::ToSurfaceDescriptor(SurfaceDescriptor& aOutDescriptor)
> +{
> + printf_stderr(" -- GrallocTextureClientOGL::ToSurfaceDescriptor \n");

remove printf

@@ +155,5 @@
> + case gfx::FORMAT_R5G6B5:
> + format = android::PIXEL_FORMAT_RGB_565;
> + break;
> + case gfx::FORMAT_A8:
> +

remove new line

::: gfx/layers/opengl/TextureClientOGL.h
@@ +10,5 @@
> #include "ISurfaceAllocator.h" // For IsSurfaceDescriptorValid
> +#include "GLContext.h" // For SharedTextureHandle
> +
> +#ifdef MOZ_WIDGET_GONK
> +#include <ui/GraphicBuffer.h>

what do we need this for?
::: gfx/thebes/gfxPlatform.cpp
@@ +259,5 @@
> + XRE_GetProcessType() == GeckoProcessType_Default &&
> + Preferences::GetBool("layers.prefer-memory-over-shmem", true);
> +
> + mLayersUseDeprecated =
> + Preferences::GetBool("layers.use-deprecated-textures", true);

Should initialise these lazily, we don't want to impact startup time by
putting things here.
Depends on: 900393
(In reply to Nick Cameron [:nrc] from comment #89)
> Comment on attachment 781778 [details] [diff] [review]
> Part 4 - Texture refactoring + OGL backend (carrying nrc: r+)
> 
> Review of attachment 781778 [details] [diff] [review]:
> -----------------------------------------------------------------
> ::: gfx/gl/GLTextureImage.h
> @@ +171,5 @@
> > * aFrom - offset in the source to update from
> > */
> > virtual bool DirectUpdate(gfxASurface *aSurf, const nsIntRegion& aRegion, const nsIntPoint& aFrom = nsIntPoint(0,0)) = 0;
> > + // Moz2D equivalent
> > + bool UpdateFromDataSource(gfx::DataSourceSurface *aSurf,
> 
> should it also be virtual?

Right now it is just a wrapper around DirectUpdate that converts a
DataSourceSurface into a gfxImageSurface, so it is not useful to make it
virtual until we really Moz2Dify GLContext.

> ::: gfx/layers/Effects.h
> @@ +7,5 @@
> > #define MOZILLA_LAYERS_EFFECTS_H
> > 
> > #include "mozilla/gfx/Matrix.h"
> > #include "mozilla/layers/Compositor.h"
> > +#include "mozilla/layers/TextureHost.h"
> 
> Do we really need this?

Yes, there is some inline code in the header that uses TextureSource. The
code was explicitly marked inline so I didn't move it to the .cpp. I guess we
could move it to the .cpp.

> ::: gfx/layers/client/ClientCanvasLayer.h
> @@ +35,5 @@
> > {
> > MOZ_COUNT_DTOR(ClientCanvasLayer);
> > + if (mCanvasClient) {
> > + mCanvasClient->Detach();
> > + mCanvasClient = nullptr;
> 
> no point setting to null

There is often a lot of important stuff triggered by the destructors in
there, and explicitly setting to null here just makes gdb backtraces easier
to read. It's not much, but it doesn't make the code any harder to
understand.

> ::: gfx/layers/client/CompositableClient.h
> @@ +109,5 @@
> > */
> > uint64_t GetAsyncID() const;
> > 
> > + /**
> > + * Tells the Compositor to create a TextureHost for this TextureClient.
> 
> presumably it keeps a reference to aClient or something too? Should add it
> to the comment, it reads a bit strange atm.
Actually it doesn't keep a reference, because CompositableClients usually
keep track of all of their buffers explicitly (by contrast with the host,
which doesn't keep track of the back buffer and must follow what the client
dictates). This method just creates and sends the OpAddTexture message.

> @@ +133,4 @@
> > protected:
> > + // The textures to destroy in the next transaction;
> > + std::vector<uint64_t> mTexturesToRemove;
> > + uint64_t mNextTextureID;
> 
> why by id and not by pointer? Even if we just look the id up in the texture,
> it makes debugging easier if you keep a reference to the texture.

Because it's the texture client's destructor that triggers adding IDs to
this array, so we can't keep a pointer to the texture at this point.

Or are you referring to mNextTextureID? That is just a counter that is
incremented to generate the ID of the next texture client; it doesn't refer
to any actually created texture.

> ::: gfx/layers/client/ImageClient.cpp
> @@ +26,5 @@
> > TextureFlags aFlags)
> > {
> > RefPtr<ImageClient> result = nullptr;
> > switch (aCompositableHostType) {
> > + case COMPOSITABLE_IMAGE:
> 
> what is this for, if it does the same thing as BUFFER_IMAGE_SINGLE?

It is there to replace BUFFER_IMAGE_SINGLE, except both need to coexist while
new textures are still behind a pref.

> @@ +338,5 @@
> > + * TextureHosts of a given CompositableHost always have different IDs.
> > + * TextureHosts of different CompositableHosts, may have the same ID.
> > + * Zero is always an invalid ID.
> > + */
> > + uint64_t GetID() const { return mID; }
> 
> is it useful for a texture host to know its ID? As opposed to only the
> compositable knowing it and mapping it to an actual texture host?

The ID is only part of the poor man's PTexture. It is only used to dispatch
to the right Texture when an IPDL message is received.

> @@ +356,5 @@
> > + /**
> > + * Debug facility.
> > + * XXX - cool kids use Moz2D
> > + */
> > + virtual already_AddRefed<gfxImageSurface> GetAsSurface() = 0;
> 
> make it ifdef DEBUG

This is used in MOZ_DUMP_PAINTING code. I am not sure whether we sometimes do
MOZ_DUMP_PAINTING on release builds, I need to check.

> @@ +377,5 @@
> > + */
> > + virtual LayerRenderState GetRenderState()
> > + {
> > + // By default we return an empty render state, this should be overriden
> > + // by the TextureHost implementations that are used on B2G with Composer2D
> 
> Composer2D is not actually tied to b2g, although that is the only platform
> that uses it at the moment. Only the Hwc implementation is b2g-specific.
> Composer2D is a general purpose API.

I don't know anything about this API. Do we have some doc somewhere?

> ::: gfx/layers/ipc/CompositableTransactionParent.cpp
> @@ +255,5 @@
> > +
> > + // if it is not the host that deallocates the shared data, then we need
> > + // to notfy the client side to tell when it is safe to deallocate or
> > + // reuse it.
> > + if (!(flags & TEXTURE_DEALLOCATE_HOST)) {
> 
> make this an else branch of the if above

compositable->RemoveTextureHost(op.textureID()); needs to happen in between
the two.

> > + static bool IsInCompositorThread();
> 
> Seems an odd place for this. Don't we have a utils class for things like
> this - ShadowLayersUtils?

Some methods of ISurfaceAllocator and friends need to know on which side they
are. The easiest is for Child and Parent to override this, because by
definition they know on which side they are (although they share some code
for allocation and deallocation at a higher level that doesn't have that
info, so we get it through polymorphism).

> @@ -338,5 @@
> > AutoEndTransaction _(mTxn);
> > 
> > - if (mTxn->IsEmpty()) {
> > - return;
> > - }
> 
> Why are you removing this?

I think it's a mistake, added it back.
> > ::: gfx/layers/ipc/LayerTransaction.ipdlh
> > @@ +314,5 @@
> > > +/**
> > > + * Tells the compositor-side which texture to use (for example, as front buffer
> > > + * if there is several textures for double buffering)
> > > + */
> > > +struct OpUseTexture {
> > 
> > Can we remove this for now? Seeing as its not used and we discussed a
> > different approach for the long run.

We do use it, and my understanding was that in the long run we would have an
OpUseTexture and an OpUseComponentAlphaTexture message. OpUseTexture is
currently the only way to do double buffering with new textures; it is
basically like a swap, but OpSwapTexture was already taken.
Comment on attachment 781778 [details] [diff] [review]
Part 4 - Texture refactoring + OGL backend (carrying nrc: r+)

Review of attachment 781778 [details] [diff] [review]:
-----------------------------------------------------------------

OK, done. More nits, mostly. r=me.

::: gfx/layers/composite/ImageHost.cpp
@@ +137,5 @@
> + fprintf(aFile, "%s", aPrefix);
> + fprintf(aFile, aDumpHtml ? "<ul><li>TextureHost: "
> + : "TextureHost: ");
> + DumpTextureHost(aFile, mFrontBuffer);
> + fprintf(aFile, aDumpHtml ? " </li></ul> " : " ");

All these Dump methods should be ifdef something, DEBUG or
MOZ_LAYERS_HAVE_LOG or something.

@@ +208,5 @@
> +{
> + if (mDeprecatedTextureHost) {
> + return mDeprecatedTextureHost->GetRenderState();
> + }
> + return LayerRenderState();

Need to implement GetRenderState for the new ImageHost, I think.

::: gfx/layers/composite/ImageHost.h
@@ +31,5 @@
> + const gfx::Point& aOffset,
> + const gfx::Filter& aFilter,
> + const gfx::Rect& aClipRect,
> + const nsIntRegion* aVisibleRegion = nullptr,
> + TiledLayerProperties* aLayerProperties = nullptr);

MOZ_OVERRIDE

::: gfx/layers/composite/TextureHost.cpp
@@ +73,5 @@
> +// implemented in TextureOGL.cpp
> +TemporaryRef<TextureHost> CreateTextureHostOGL(uint64_t aID,
> + const SurfaceDescriptor& aDesc,
> + ISurfaceAllocator* aDeallocator,
> + TextureFlags aFlags);

Why do we need a decl in a cpp file? Shouldn't this be in a header file?

@@ +381,5 @@
> +
> +already_AddRefed<gfxImageSurface>
> +BufferTextureHost::GetAsSurface()
> +{
> + nsRefPtr<gfxImageSurface> result;

whitespace

::: gfx/layers/opengl/TextureHostOGL.cpp
@@ +150,5 @@
> + mTexImage->GetContentType() != gfx::ContentForFormat(aSurface->GetFormat())) {
> + if (mAllowTiling) {
> + // XXX - clarify the which size we want to use. Some use cases may
> + // require the size of the destnation surface to be different from
> + // the size of aSurface.

typos in this para.
sounds like we really want to sort this out

@@ +243,5 @@
> +{
> + if (!gl()) {
> + return;
> + }
> + // TODO[happy reviewer] is do we need to bind the texture here?

please ping bjacob about this

@@ +256,5 @@
> +
> +bool
> +SharedTextureSourceOGL::IsValid() const
> +{
> + return gl() != nullptr;

!!gl() or just gl()

@@ +282,5 @@
> +
> +SharedTextureHostOGL::~SharedTextureHostOGL()
> +{
> + // If need to deallocate textures, call DeallocateSharedData() before
> + // the destructor

assert this, use a DebugOnly flag

@@ +308,5 @@
> + return false;
> + }
> +
> + GLenum wrapMode = LOCAL_GL_CLAMP_TO_EDGE;
> + mTextureSource = new SharedTextureSourceOGL(nullptr, // Compositor

why not use mCompositor?

::: gfx/layers/opengl/TextureHostOGL.h
@@ +73,5 @@
> + virtual void BindTexture(GLenum aTextureUnit) = 0;
> +
> + /**
> + * Unbind this texture
> + * XXX - "Release" is confusing, should be called UnbindTexture instead.

please do!

@@ +252,5 @@
> +public:
> + SharedTextureHostOGL(uint64_t aID,
> + TextureFlags aFlags,
> + gl::GLContext::SharedTextureShareType aShareType,
> + gl::SharedTextureHandle aSharedhandle,

indent
Attachment #781778 - Flags: review+
(In reply to Nick Cameron [:nrc] from comment #92)
> Comment on attachment 781778 [details] [diff] [review]
> Part 4 - Texture refactoring + OGL backend (carrying nrc: r+)
> 
> Review of attachment 781778 [details] [diff] [review]:
> -----------------------------------------------------------------
> All these Dump methods should be ifdef something, DEBUG or
> MOZ_LAYERS_HAVE_LOG or something.

Yes, I have a patch locally that adds all the MOZ_DUMP_PAINTING ifdefs
everywhere. I had to add it for all the TextureHost and compositable classes,
for the Dump and GetAsSurface methods. I am still not fond of it, because now
if you want to add a TextureHost class you have to do two different builds to
make sure it compiles (regardless of the platform stuff).

> > + GLenum wrapMode = LOCAL_GL_CLAMP_TO_EDGE;
> > + mTextureSource = new SharedTextureSourceOGL(nullptr, // Compositor
> 
> why not use mCompositor?

A mistake. This code used to be in a place where mCompositor wasn't set yet.
Fixed it.
(In reply to Nicolas Silva [:nical] from comment #91)
> (In reply to Nick Cameron [:nrc] from comment #89)
> > Comment on attachment 781778 [details] [diff] [review]
> > Part 4 - Texture refactoring + OGL backend (carrying nrc: r+)
> > 
> > Review of attachment 781778 [details] [diff] [review]:
> > -----------------------------------------------------------------
> > ::: gfx/layers/Effects.h
> > @@ +7,5 @@
> > > #define MOZILLA_LAYERS_EFFECTS_H
> > > 
> > > #include "mozilla/gfx/Matrix.h"
> > > #include "mozilla/layers/Compositor.h"
> > > +#include "mozilla/layers/TextureHost.h"
> > 
> > Do we really need this?
> 
> yes there is some inline code that use TextureSource in the header. The code
> was explicitly marked inline so I didn't move it to the .cpp.
> I guess we could move it to the .cpp

No, it's fine. I was just curious why you added the include, but I see it in
a lot of other files too, so I guess it is just an include dependency change.
In general I'm in favour of having small functions in the headers. I was a
little sad to see a lot of the constructors move to the cpp files in this
patch. But it is no biggie.

> > ::: gfx/layers/client/CompositableClient.h
> > @@ +109,5 @@
> > > */
> > > uint64_t GetAsyncID() const;
> > > 
> > > + /**
> > > + * Tells the Compositor to create a TextureHost for this TextureClient.
> > 
> > presumably it keeps a reference to aClient or something too? Should add it
> > to the comment, it reads a bit strange atm.
> 
> Actually it doesn't keep a reference because CompositableClients usually
> keep track of all of their buffers explicitly (by contrast of the host who
> doesn't keep track of the back buffer and must follow what the client
> dectates.
> This method just creates and sends the OpAddTexture message.

Then it is probably worth adding _that_ to the comment, because the 'Add' in
the name makes me think it is adding it to the object.
> > @@ +133,4 @@
> > > protected:
> > > + // The textures to destroy in the next transaction;
> > > + std::vector<uint64_t> mTexturesToRemove;
> > > + uint64_t mNextTextureID;
> > 
> > why by id and not by pointer? Even if we just look the id up in the texture,
> > it makes debugging easier if you keep a reference to the texture.
> 
> Because it's the texture client's destructor that triggers adding IDs in
> this array so we can't keep a pointer to the texture at this point.

OK. Maybe worth explaining in the comment that the texture client will
already be dead and it is the texture host that will be destroyed in the next
transaction.

> > @@ +338,5 @@
> > > + * TextureHosts of a given CompositableHost always have different IDs.
> > > + * TextureHosts of different CompositableHosts, may have the same ID.
> > > + * Zero is always an invalid ID.
> > > + */
> > > + uint64_t GetID() const { return mID; }
> > 
> > is it useful for a texture host to know its ID? As opposed to only the
> > compositable knowing it and mapping it to an actual texture host?
> 
> the ID is only part of the poor man's PTexture. it is only used to dispatch
> to the right Texture when an IPDL message is received.

So that sounds like the texture doesn't need it, so you can remove mID and
GetID()?

> > @@ +356,5 @@
> > > + /**
> > > + * Debug facility.
> > > + * XXX - cool kids use Moz2D
> > > + */
> > > + virtual already_AddRefed<gfxImageSurface> GetAsSurface() = 0;
> > 
> > make it ifdef DEBUG
> 
> This is used in MOZ_DUMP_PAINTING code, I am not sure whether we sometimes
> do MOZ_DUMP_PAINTING on release builds, I need to check.
Then ifdef MOZ_DUMP_PAINTING.

> > @@ +377,5 @@
> > > + */
> > > + virtual LayerRenderState GetRenderState()
> > > + {
> > > + // By default we return an empty render state, this should be overriden
> > > + // by the TextureHost implementations that are used on B2G with Composer2D
> > 
> > Composer2D is not actually tied to b2g, although that is the only platform
> > that uses it at the moment. Only the Hwc implementation is b2g-specific.
> > Composer2D is a general purpose API.
> 
> I don't know anything about this API do we have some doc somewhere?

No :-( But it is not very complex. In reality it is only likely to be used on
b2g, but good to be precise in the comments.

> > ::: gfx/layers/ipc/LayerTransaction.ipdlh
> > @@ +314,5 @@
> > > +/**
> > > + * Tells the compositor-side which texture to use (for example, as front buffer
> > > + * if there is several textures for double buffering)
> > > + */
> > > +struct OpUseTexture {
> > 
> > Can we remove this for now? Seeing as its not used and we discussed a
> > different approach for the long run.
> 
> We do use it and my understanding was that in the long run we would have a
> OpUseTexture and OpUseComponentAlphaTexture message. OpUseTexture is
> currently the only way to do double buffering with new textures it is
> basically like a swap but OpSwapTexture was already taken.

Hmm, yes I remember now, sorry.
Attached patch Part 4.5 - Various cleanups (deleted) — Splinter Review
This fixes most of the recent nit comments about Part 4. There are one or two
more involved things that I will look at when it's not 4 am.

I'm just asking for feedback for now because I am still not at ease with
ifdefing the dump and GetAsSurface stuff. We need to either:
- make those NOT pure virtual and decide that they are just debug stuff, or
- keep them pure virtual but without making us rebuild with and without
  MOZ_DUMP_PAINTING.
Attachment #784750 - Flags: feedback?(ncameron)
Blocks: 901107
Blocks: 901219
Blocks: 901220
Blocks: 901224
Attachment #784750 - Flags: feedback?(ncameron) → feedback+
(In reply to Nicolas Silva [:nical] from comment #95)
> Created attachment 784750 [details] [diff] [review]
> Part 4.5 - Various cleanups
> 
> This fixes most of the recent nit comments about Part 4. There are one or
> two more involved things that I will look at when It's not 4am.
> 
> Actually just asking for feedbacks because I am still not at ease with
> ifdefing the dump and GetAsSurface stuff. We need to either
> - make those NOT pure virtual and decide that they are just debug stuff
> - or keep them as pure virtual but not make us rebuild with and without
> MOZ_DUMP_PAINTING

Just file a bug for this and mattwoodrow and I can argue about whether
MOZ_DUMP_PAINTING should be replaced by DEBUG or not. In the mean time, I
wouldn't worry about MOZ_DUMP_PAINTING builds being broken: if they are, it
is really easy to fix as soon as someone notices, it affects neither testing
nor release builds, and it will give us some motivation to actually resolve
the issue.
The landing in comment 88 seems to have caused crashes in libstagefright video playback on Sony Xperia devices - Bug 901420 according to a git bisect. Does this sound likely?
Depends on: 901420
Blocks: 902927
Depends on: 903071
Attached patch Part 7 - Gralloc changes (WIP) (obsolete) (deleted) — Splinter Review
Current state of the gralloc textures; not working yet, as I haven't put the
proper deallocation/recycling mechanism in place yet.
Attachment #780353 - Attachment is obsolete: true
Attachment #776495 - Flags: review?(matt.woodrow) → review+
Blocks: 903969
blocking-b2g: --- → koi?
Adding smoketest keyword mainly because this is causing many smoketests on b2g's trunk build to fail right now (e.g. crashes, hangs).
Attached patch Part 7 - Gralloc changes (WIP) (deleted) — Splinter Review
Quick update: I just got the video app working on unagi with the new textures. When I say working, I mean I got a video playing, but I haven't double-checked that memory is properly managed yet. The camera is broken on unagi at the moment, so I haven't been able to test the new textures with it yet. Almost there.
Attachment #788190 - Attachment is obsolete: true
(In reply to Nicolas Silva [:nical] from comment #102) > Created attachment 792927 [details] [diff] [review] > Part 7 - Gralloc changes (WIP) > > Quick update: I just got the video app working on unagi with the new > textures. When I say working I mean I got a video playing, but I haven't > double checked that memory is properly managed yet. > The camera is broken on unagi at the moment so I haven't been able to test > the new textures with it yet. > > Almost there. Nice! Note, though, that others have claimed that the completion of this work would get the Unagi camera back to working, so if this is not the case, it's good to know early.
> Nice! Note, though, that others have claimed that the completion of this > work would get the Unagi camera back to working, so if this is not the case, > it's good to know early. To get the Unagi camera working again, Bug 907309 and Bug 901220 also seem necessary.
Uplift of Bug 879297 to master is also necessary. I just talked about it with Jeff.
(In reply to Sotaro Ikeda [:sotaro] from comment #104) > > Nice! Note, though, that others have claimed that the completion of this > > work would get the Unagi camera back to working, so if this is not the case, > > it's good to know early. > > To fix the Unagi camera back to working, Bug 907309 and Bug 901220 seems > also necessary. Those are needed to get it fully working; just to be able to test, only bug 879297 is needed.
Just to clarify: the goal of this bug is to put in place a good foundation on top of which fixing a lot of the buffer life-time issues is easier, and to get rid of very hacky code that is fragile, buggy and very hard to reason about. This is not a silver bullet that will fix everything once landed; it is meant to make fixing things easier (and in some cases make it possible at all).
I just tested the camera on the emulator and it works with the new textures too.
blocking-b2g: --- → koi?
No longer blocks: 880780
No longer blocks: 901219
No longer blocks: 901220
No longer blocks: 901107
No longer blocks: 871364
No longer blocks: 901224
Does this bug block 916264? Keyboard typing is extremely laggy, and I'm trying to find the bug that will fix it. See the screencast in bug 916264.
(In reply to Tony Chung [:tchung] from comment #111) > Does this bug block 916264? keyboard typing is extremely laggy, and i'm > trying to find the bug that will fix it. see screencast - 916264 We won't have new textures on all types of layers in time for b2g 1.2 so this bug only affects video and non-accelerated canvas for now. It doesn't block 916264.
Depends on: 922007
Depends on: 938591
No longer depends on: 899044
No longer blocks: NewTextures
No longer blocks: PTexture
Keywords: smoketest
(In reply to Nicolas Silva [:nical] from comment #30) > There is more to come, sorry I forgot to add the [leave open] flag Looks like several patches landed since that comment added [leave open]. Is there still more coming, or can we close this bug now?
Flags: needinfo?(nical.bugzilla)
Let's close it.
Status: REOPENED → RESOLVED
Closed: 11 years ago11 years ago
Flags: needinfo?(nical.bugzilla)
Resolution: --- → FIXED
Whiteboard: [leave open]
Regressions: 1640901