Closed Bug 825931 Opened 12 years ago Closed 12 years ago
[layers-refactoring] sort out texture serialization / deserialization
Categories: Core :: Graphics: Layers, defect
Status: RESOLVED FIXED
People: Reporter: nical, Assigned: nical
Whiteboard: [metro-gfx]
In the new layers architecture, deserialization logic is handled within TextureHost sub-classes. TextureHost is instantiated early, in some cases before we know the actual texture type, which makes us use slow code paths.
We need to implement one (and only one) TextureHost class per compositor backend that picks the right deserialization strategy as textures come in (similar to what the legacy layers do).
This means that the TextureHostType enum is not useful anymore.
Comment 1•12 years ago
(In reply to Nicolas Silva [:nical] from comment #0)
> In the new layers architecture deserialization logic is handled within
> TextureHost sub-classes. TextureHost is instantiated early, in some cases
> before we know the actual texture type, which makes us use slow code paths.
Could you give some more detail about what is happening here? Why does it have to be created early? Could we change the texture host later on, after starting on the safest path?
Comment 2•12 years ago
Should we think about trying to land without using the Texture protocol, piggy-backing on the Layers protocol for now, and doing the Texture protocol in a second wave? Would that work here?
Assignee
Comment 3•12 years ago
The way it works at the moment, TextureClient/Host pairs are created roughly at the same time as their layers, and the code that triggers this doesn't know what type of texture will be sent. For an ImageLayer, we only know that the data will be a YCbCrImage when we receive the first frame.
Once the TextureHost is created, it's awkward to change it because both the BufferHost and the TextureParent hold a reference to it.
We discussed this with Bas and the simplest solutions are to either move the deserialization logic into another class that would be a component of TextureHost and could be changed whenever needed, or have TextureHost[OGL/D3D/...] handle every case in UpdateTextureImpl.
Since deserializing a SharedImage is stateless, I'm thinking about implementing a bunch of deserializeFoo(const Foo& image, GLContext* aGL, nsTArray<RefPtr<BindableTexture>>& output) functions separately with "Foo" being SurfaceDescriptor, SharedTexture, YCbCrImage, etc.
TextureHostOGL would just pick the right logic depending on the input image type.
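Roughly, it would look something like this (just a sketch; the helper names, member names, and union accessors below are illustrative, not code from the tree):

// Stateless helpers, one per image type:
void DeserializeYCbCrImage(const YCbCrImage& aImage, GLContext* aGL,
                           nsTArray<RefPtr<BindableTexture>>& aOutput);
void DeserializeSharedTexture(const SharedTexture& aImage, GLContext* aGL,
                              nsTArray<RefPtr<BindableTexture>>& aOutput);
void DeserializeSurfaceDescriptor(const SurfaceDescriptor& aImage, GLContext* aGL,
                                  nsTArray<RefPtr<BindableTexture>>& aOutput);

// The single OGL host just dispatches on the type of the incoming image:
void TextureHostOGL::UpdateImpl(const SharedImage& aImage)
{
  switch (aImage.type()) {
    case SharedImage::TYCbCrImage:
      DeserializeYCbCrImage(aImage.get_YCbCrImage(), mGL, mTextures);
      break;
    case SharedImage::TSharedTexture:
      DeserializeSharedTexture(aImage.get_SharedTexture(), mGL, mTextures);
      break;
    // ... and so on for the other descriptor types
  }
}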
It's not entirely satisfying but it would make the current problem of pairing TextureClient/Host non-existent.
I'd much rather do this than remove the PTexture protocol.
I'm not against coming back with another solution after landing, but I think this is the fastest way to get things working properly.
Comment 4•12 years ago
Thanks for the explanation. I believe you that this is probably the best solution, but some more questions for my understanding. Are there situations where the kind of texture memory we are using affects what kind of deserialisation we need to do? E.g., if we are using ImageBridge. Can all of this be encapsulated nicely in a DeserializeFoo routine?
(In reply to Nicolas Silva [:nical] from comment #3)
> The way it works at the moment, TextureClient/Host pairs are created roughly
> at the same time as their layers, and the code that triggers this doesn't
> know about what type of texture will be sent. For an ImageLayer, we know
> know that the data will be a YCbCrImage when we receive the first frame.
> Once the TextureHost is created it's awkward to change it because both the
> BufferHost and TextureParent hold a reference to it.
Why can't the code that creates the TextureHost know what kind of textures will be used?
Could we add a layer of indirection to solve the updating problem? As an example design: the BufferHost has a ref to a TextureSource, which has a (weak) ref to the TextureHost; the host has a strong ref to the Source. The TextureParent has a ref to the TextureHost. The host is responsible only for receiving data and the source only for rendering it. If the texture host needs to be changed, that can be done by the TextureParent, and the host updates its Source to point to the new Host.
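As a very rough sketch of that shape (all the names and refs below are purely illustrative):

class TextureSource {
public:
  TextureHost* mHost;              // weak ref back to whichever host currently feeds it
  // ... rendering state and the Bind()/Render() entry point live here
};

class TextureHost {
public:
  RefPtr<TextureSource> mSource;   // strong ref: the host writes decoded data here
  virtual void Update(const SharedImage& aImage) = 0;
};

class TextureParent {
  RefPtr<TextureHost> mHost;
public:
  // Swapping strategies only touches these two references; the BufferHost
  // keeps holding the same TextureSource throughout.
  void ReplaceHost(TextureHost* aNewHost) {
    aNewHost->mSource = mHost->mSource;
    aNewHost->mSource->mHost = aNewHost;
    mHost = aNewHost;
  }
};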
I think we need to be able to change TextureHosts anyway when, for example, the size of a texture changes and we need to fall back to a slower path because there is not enough 'fast' memory.
>
> We discussed this with Bas and the simplest solutions are to either move
> the deserialization logic into another class that would be a component of
> TextureHost and could be changed whenever needed, or have
> TextureHost[OGL/D3D/...] handle every case in UpdateTextureImpl.
I like having different classes for each kind of Update. Having just one OGL Host class feels like bad design - too many if statements, too much dynamic checking of a property which is mostly static.
>
> Since deserializing a SharedImage is stateless, I'm thinking about
> implementing a bunch of deserializeFoo(const Foo& image, GLContext* aGL,
> nsTArray<RefPtr<BindableTexture>>& output) functions separately with "Foo"
> being SurfaceDescriptor, SharedTexture, YCbCrImage, etc.
> TextureHostOGL would just pick the right logic depending on the input image
> type.
Something like this might be nice. If deserialising is really stateless, we could have pretty much flyweight TextureHosts with functional deserialisation routines. And store all the state in the TextureSource.
> It's not entirely satisfying but it would make the current problem of
> pairing TextureClient/Host non-existent.
>
I don't think the pairing is much of a problem, and really this just moves the problem to when we receive texture data, rather than when we create the layer, and makes the path followed less explicit.
> I'd much rather do this than remove the PTexture protocol.
>
I only suggest doing this temporarily. PTexture is definitely a good idea, and almost orthogonal to the issues here.
> I'm not against coming back with another solution after landing, I think
> this is the fastest way to get things to work properly.
Threading out the different paths through the update code into separate classes was one of the more difficult parts of the refactoring so far. I worry that the more we make substantial changes like this, the longer the whole thing will take and the more opportunity there is for serious bugs. It may be the lesser of evils, though.
Assignee
Comment 5•12 years ago
(In reply to Nick Cameron [:nrc] from comment #4)
> Thanks for the explanation. I believe you that this is probably the best
> solution, but some more questions for my understanding. Are there situations
> where the kind of texture memory we are using affects what kind of
> deserialisation we need to do? E.g., if we are using ImageBridge. Can all of
> this be encapsulated nicely in a DeserializeFoo routine?
For ImageBridge it's good. It shouldn't have anything to do with texture backends, so I modified the ImageBridge code to accept any TextureHost (the update just comes from an ImageContainerParent instead of a TextureParent when using ImageBridge).
>
>
> (In reply to Nicolas Silva [:nical] from comment #3)
> > The way it works at the moment, TextureClient/Host pairs are created roughly
> > at the same time as their layers, and the code that triggers this doesn't
> > know about what type of texture will be sent. For an ImageLayer, we know
> > know that the data will be a YCbCrImage when we receive the first frame.
> > Once the TextureHost is created it's awkward to change it because both the
> > BufferHost and TextureParent hold a reference to it.
>
> Why can't the code that creates the TextureHost know what kind of textures
> will be used?
For instance, with ImageBridge we can receive YCbCr data and in some cases RGB data (since a commit that landed after the last rebase). It's kinda tricky to know which we have in the layers code before the first frame has been sent.
>
> Could we add a layer of indirection to solve the updating problem? As an
> example design, could the BufferHost have a ref to a TextureSource which has
> a (weak) ref to the TextureHost, the host has a strong ref to the Source.
> The TextureParent has a ref to the TextureHost. The host is responsible only
> for receiving data and the source only for rendering it. If the texture host
> needs to be changed it can be done by the TextureParent and the host updates
> its Source to point to the new Host.
I am not very fond of bi-directional references in general. We can get away with a component-based approach; see below.
>
> I think we need to be able to change TextureHosts anyway when, for example,
> the size of a texture changes and we need to fallback to a slower path
> because there is not enough 'fast' memory.
>
> >
> > We discussed this with Bas and the simplest solutions are to either move
> > the deserialization logic into another class that would be a component of
> > TextureHost and could be changed whenever needed, or have
> > TextureHost[OGL/D3D/...] handle every case in UpdateTextureImpl.
>
> I like having different classes for each kind of Update. Having just one OGL
> Host class feels like bad design - too many if statements, too much dynamic
> checking of a property which is mostly static.
>
This can still be done by making the deserialization and the storage parts components of TextureHost:
What I am trying out now is to have a TextureHostStorage interface which provides access to texture data, and a TextureHostDeserializer that would only read a SharedImage and write into a TextureStorage (the only "state" would be the output storage) and can replace the storage with another kind if needed (cf. the low-memory scenario you mentioned, or when we don't know the image type before reception).
Both of these would be components of TextureHostOGL.
So basically this just shifts some of the responsibilities of the TextureHost classes into components of TextureHost, and lets us switch strategies at runtime without any reference fixup.
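In code, the split I'm experimenting with looks roughly like this (class and method names here are placeholders, not necessarily what would land):

// Owns the actual gl resources (textures), and is the only thing that does.
class TextureHostStorage {
public:
  virtual ~TextureHostStorage() {}
  virtual BindableTexture* GetTexture(size_t aIndex) = 0;
};

// Stateless apart from knowing where to write: reads a SharedImage and fills
// the storage, replacing it with a different kind of storage if needed.
class TextureHostDeserializer {
public:
  virtual ~TextureHostDeserializer() {}
  virtual void Deserialize(const SharedImage& aImage, GLContext* aGL,
                           nsAutoPtr<TextureHostStorage>& aStorage) = 0;
};

class TextureHostOGL {
  GLContext* mGL;
  nsAutoPtr<TextureHostDeserializer> mDeserializer;
  nsAutoPtr<TextureHostStorage> mStorage;
public:
  void Update(const SharedImage& aImage) {
    // Both components can be swapped at runtime; nothing outside
    // TextureHostOGL holds a reference to them, so no fixup is needed.
    mDeserializer->Deserialize(aImage, mGL, mStorage);
  }
};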
> >
> > Since deserializing a SharedImage is stateless, I'm thinking about
> > implementing a bunch of deserializeFoo(const Foo& image, GLContext* aGL,
> > nsTArray<RefPtr<BindableTexture>>& output) functions separately with "Foo"
> > being SurfaceDescriptor, SharedTexture, YCbCrImage, etc.
> > TextureHostOGL would just pick the right logic depending on the input image
> > type.
>
> Something like this might be nice. If deserialising is really stateless, we
> could have pretty much flyweight TextureHosts with functional
> deserialisation routines. And store all the state in the TextureSource.
I'd rather not store gl resources in TextureSources because I want TextureHost to own all the gl resources that might need to be released at once during the shutdown sequence (or to own the TextureStorage that owns the resources). The rest of the state (wrap mode, texCoords, etc.) can go in TextureSource, no problem.
TextureSourceOGL is becoming less and less GL-ish as I move its responsibilities into components. Later, if this works well, we may want to do this at the cross-backend TextureHost level (provided D3D, Basic and GL roughly need the same BindableTexture concept for texture sources, that is, a BindTexture method, a wrap mode, a size, and texture coordinates. I don't know.)
>
> > It's not entirely satisfying but it would make the current problem of
> > pairing TextureClient/Host non-existent.
> >
>
> I don't think the pairing is much of a problem, and really this just moves
> the problem to when we receive texture data, rather than when we create the
> layer, and makes the path followed less explicit.
Perhaps pairing is not the right word, but what I mean is that the problem is picking the right deserialization/storage strategy, and this is much easier to do when we receive actual image data rather than when we create the layer, let alone potentially changing strategy when the size changes, as you said.
>
> > I'd much rather do this than remove the PTexture protocol.
> >
>
> I only suggest doing this temporarily. PTexture is definitely a good idea.
> And almost as an orthogonal thing to the issues here.
>
> > I'm not against coming back with another solution after landing, I think
> > this is the fastest way to get things to work properly.
>
> Threading out the different paths through the update code into separate
> classes was one of the more difficult parts of the refactoring so far. I
> worry that the more we make substantial changes like this the longer the
> whole thing will take and the more opportunity there is for serious bugs. It
> may be that it is the lesser of evils though.
If not having the code in separate classes is what worries you, I suppose the component approach with separate classes for deserialization/storage (just not TextureHost) would be a good compromise, then? I was going to keep them in separate functions otherwise anyway.
The most important thing for me here is that we can change strategies dynamically without fixing up references, and that we keep a strict ownership model for gl resources.
Comment 6•12 years ago
OK, that sounds good. I am not totally desperate to split the different strategies into different classes (separate functions are almost as good, better if we really think that switching should be very dynamic); if avoiding that gives a better design then we should go for it.
I prefer to decompose into many classes 'horizontally' (i.e., each path gets a separate class) rather than 'vertically' (i.e., each stage of the path gets a different class). I think that this is better reuse because we are more likely to add another path in the future (e.g., another kind of serialisation/texture memory) than we are to add another stage to the path (I can't even think of an e.g.).
Your component idea sounds nice in that each class has very strictly defined responsibilities. I worry a little bit that we are increasing the number of classes too much - we started with layers and already have layers, buffers (host & client), textures (host, client, source, bindable), and so forth. We need to be careful not to introduce too much complexity, even in the name of cleanliness.
I don't think having a few extra references here is too bad; as long as we don't have cycles of owning references, some non-owning references (even if we make a cycle) are not so bad. After all, these are very closely related objects, so I would expect fairly tight coupling.
It is interesting that TextureSource might be able to be platform-independent. I think dividing up responsibilities between host and source is a tricky problem.
On a tangent, I am starting to think (again) that since the common case is for textures to have a single source and a single host, we should support this in a single class, which means using inheritance rather than composition for that (also creating a texture source every time we render a host is kind of annoying). Not sure if that is possible without some horrific diamond-ey hierarchy though.
Assignee
Comment 7•12 years ago
Things have changed a lot recently in this area, and I didn't update bugzilla, but basically we settled on a way to swap TextureHosts that is closer to what we have now (it doesn't involve the component-based approach I was suggesting in this bug, etc.).
I am still working on this.
Updated•12 years ago
Whiteboard: [metro-gfx]
Updated•12 years ago
Blocks: metro-omtc
Assignee
Comment 8•12 years ago
Texture host switching works on the gfx branch now.
Status: NEW → RESOLVED
Closed: 12 years ago
Resolution: --- → FIXED
Updated•12 years ago
No longer blocks: metro-omtc