Give the texture cache one LRU cache per texture type, rather than one global LRU cache with a global limit
Categories
(Core :: Graphics: WebRender, task)
Tracking
 | Tracking | Status
---|---|---
firefox86 | --- | fixed
People
(Reporter: mstange, Assigned: mstange)
Attachments
(3 files)
The texture cache currently has a single global limit. This means that allocating an entry in a shared glyph texture can evict an entry from a shared image texture, and vice versa.
It would probably be better to give each shared texture type its own limit, and perhaps a separate limit for standalone textures. With per-type limits, we're more likely to evict an entry that frees up space we actually need, rather than evict an entry in an unrelated texture that already has plenty of free space.
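To make the idea concrete, here is a minimal sketch of per-type LRU partitions with independent limits (all types and names below are hypothetical, not WebRender's actual API):

```rust
use std::collections::VecDeque;

// Hypothetical texture-type tags; WebRender's real categories differ.
#[derive(Clone, Copy, PartialEq, Debug)]
enum TextureType {
    Glyphs,
    Images,
}

// One LRU queue per texture type, each with its own entry limit,
// instead of a single queue with one global limit.
struct PartitionedCache {
    // (type, limit, entries; most recently used at the back)
    partitions: Vec<(TextureType, usize, VecDeque<u64>)>,
}

impl PartitionedCache {
    // Insert an entry; evict only within the entry's own partition.
    fn insert(&mut self, ty: TextureType, entry: u64) -> Option<u64> {
        let (_, limit, queue) = self
            .partitions
            .iter_mut()
            .find(|(t, _, _)| *t == ty)
            .expect("unknown texture type");
        queue.push_back(entry);
        if queue.len() > *limit {
            queue.pop_front() // other texture types are never affected
        } else {
            None
        }
    }
}

fn main() {
    let mut cache = PartitionedCache {
        partitions: vec![
            (TextureType::Glyphs, 2, VecDeque::new()),
            (TextureType::Images, 2, VecDeque::new()),
        ],
    };
    cache.insert(TextureType::Glyphs, 1);
    cache.insert(TextureType::Glyphs, 2);
    // An image allocation can no longer evict a glyph entry.
    assert_eq!(cache.insert(TextureType::Images, 3), None);
    // A third glyph entry evicts the least-recently-used glyph entry.
    assert_eq!(cache.insert(TextureType::Glyphs, 4), Some(1));
}
```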
Assignee
Comment 1 • 4 years ago
The hardest part here (that I've found so far) seems to be the "handle re-use" behavior. At the moment, you can call update() with the same handle multiple times and request different texture formats and sizes each time. In that case, the LRU cache simply replaces the handle's old entry with the new one, and the new entry might belong to a different texture. But that's fine, since it's still the same LRU cache: it's just one cheap call to lru_cache.replace_or_insert(...).
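For illustration, a toy version of the single-cache case (hypothetical stand-ins, not the real LRUCache code): re-using a handle just overwrites its entry in place, regardless of which texture the new entry describes.

```rust
use std::collections::HashMap;

// Hypothetical stand-ins for the texture cache's handle and entry types.
type Handle = u32;

#[derive(Debug)]
struct Entry {
    texture_id: u16,  // which shared texture the entry lives in
    size: (u16, u16), // allocated size within that texture
}

struct LruCache {
    entries: HashMap<Handle, Entry>,
    // (the real cache also maintains LRU ordering; omitted here)
}

impl LruCache {
    // With one global cache, handle re-use is trivial: swap the old entry
    // for the new one, even if the new entry targets a different texture.
    fn replace_or_insert(&mut self, handle: Handle, new_entry: Entry) -> Option<Entry> {
        self.entries.insert(handle, new_entry)
    }
}

fn main() {
    let mut cache = LruCache { entries: HashMap::new() };
    let handle: Handle = 7;
    cache.replace_or_insert(handle, Entry { texture_id: 0, size: (32, 32) });
    // Same handle, different size and texture: still one cheap call.
    let old = cache.replace_or_insert(handle, Entry { texture_id: 3, size: (256, 256) });
    println!("replaced: {:?}", old);
}
```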
With separate LRU caches, we'll need to be able to "move" entries from one LRU cache to another, or, in other words, "remove" an entry from one LRU cache and add it to another. The hard part is the removal from the old LRU cache: at the moment, there is only an API for removing "manual eviction" entries, but we'd need this ability for automatically evicted entries as well. It might be even trickier for manually managed texture cache entries; for those, removal is only possible via the entry's "strong handle", which is stored in the texture cache and is currently looked up with a linear search.
So what is currently a simple call to replace_or_insert might become the following beast: check whether the entry needs to move to a different LRU cache. If it does, check whether the entry is manually or automatically managed; if manually, find the strong handle and remove it; if automatically, remove the handle with a new method that allows removal of weak handles.
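In sketch form, the update path would then grow roughly this shape (a runnable toy; every name here is hypothetical rather than the actual texture cache API):

```rust
// A toy sketch of the branching described above; all names are
// hypothetical stand-ins, not WebRender's actual types or methods.

#[derive(Clone, Copy, PartialEq, Debug)]
enum CacheId {
    SharedGlyphs,
    SharedImages,
}

#[derive(Clone, Copy)]
enum EvictionPolicy {
    Auto,
    Manual,
}

#[derive(Clone, Copy, Debug)]
struct WeakHandle {
    cache: CacheId,
    key: u32,
}

struct TextureCache {
    // Strong handles for manually managed entries; finding the one that
    // matches a weak handle currently takes a linear search.
    manual_handles: Vec<WeakHandle>,
}

impl TextureCache {
    fn update(&mut self, handle: &mut WeakHandle, new_cache: CacheId, policy: EvictionPolicy) {
        if handle.cache == new_cache {
            // Same LRU cache: the old cheap replace_or_insert path applies.
            println!("replace_or_insert in {:?}", new_cache);
            return;
        }
        // The entry must move to a different LRU cache: remove it from the
        // old cache first, then insert it into the new one.
        match policy {
            EvictionPolicy::Manual => {
                // Linear search for the matching strong handle, then remove it.
                if let Some(pos) = self.manual_handles.iter().position(|h| h.key == handle.key) {
                    self.manual_handles.remove(pos);
                }
            }
            EvictionPolicy::Auto => {
                // Would require a new LRU-cache API: removal via a weak handle.
                println!("remove weak handle {}", handle.key);
            }
        }
        handle.cache = new_cache;
        println!("inserted key {} into {:?}", handle.key, new_cache);
    }
}

fn main() {
    let mut cache = TextureCache {
        manual_handles: vec![WeakHandle { cache: CacheId::SharedGlyphs, key: 7 }],
    };
    let mut handle = WeakHandle { cache: CacheId::SharedGlyphs, key: 7 };
    cache.update(&mut handle, CacheId::SharedImages, EvictionPolicy::Manual);
    cache.update(&mut handle, CacheId::SharedGlyphs, EvictionPolicy::Auto);
}
```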
I'll think about this some more.
Assignee
Comment 2 • 4 years ago
I think I've found a solution.
Assignee
Comment 3 • 4 years ago
When I wrote this patch, I thought it would simplify the next patch in this series, but I don't think it made much of a difference in the end. I still think this patch improves things and is worth taking, though.
Assignee
Comment 4 • 4 years ago
All LRU partitions use the same freelist to store the entries, and only have separate LRU indexes. The shared freelist makes it easier to transfer an entry from one LRU partition to another. If instead we used different LRUCache instances, moving entries between partitions would be cumbersome because we would need to look up the strong handle for the entry so that we could remove it from the old freelist.
Depends on D102121
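A minimal runnable sketch of that shared-freelist idea (the types and methods below are invented for illustration, not the actual patch): all partitions store their entries in one slot array, so a handle is just a slot index and stays valid when an entry moves between partitions; only the per-partition LRU orderings change.

```rust
// Hypothetical sketch of "one shared freelist, separate LRU indexes";
// the names below are invented for illustration, not WebRender's API.

struct SharedFreelistLru<T> {
    // One freelist holds every entry, regardless of partition...
    slots: Vec<Option<T>>,
    free: Vec<usize>,
    // ...while each partition keeps only an ordering of slot indices
    // (front = least recently used).
    partitions: Vec<Vec<usize>>,
}

impl<T> SharedFreelistLru<T> {
    fn new(num_partitions: usize) -> Self {
        SharedFreelistLru {
            slots: Vec::new(),
            free: Vec::new(),
            partitions: vec![Vec::new(); num_partitions],
        }
    }

    // Insert into a partition; the returned handle is the slot index.
    fn insert(&mut self, partition: usize, value: T) -> usize {
        let index = match self.free.pop() {
            Some(i) => {
                self.slots[i] = Some(value);
                i
            }
            None => {
                self.slots.push(Some(value));
                self.slots.len() - 1
            }
        };
        self.partitions[partition].push(index);
        index
    }

    // Moving an entry between partitions touches only the LRU index lists;
    // the entry never leaves its freelist slot, so its handle stays valid
    // and no strong-handle lookup is needed.
    fn transfer(&mut self, from: usize, to: usize, index: usize) {
        self.partitions[from].retain(|&i| i != index);
        self.partitions[to].push(index);
    }
}

fn main() {
    let mut lru: SharedFreelistLru<&str> = SharedFreelistLru::new(2);
    let handle = lru.insert(0, "glyph atlas entry");
    lru.transfer(0, 1, handle); // cheap: same slot, different LRU ordering
    assert!(lru.slots[handle].is_some());
}
```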
Assignee
Comment 5 • 4 years ago
Depends on D102122
Assignee
Comment 6 • 4 years ago

Assignee
Comment 7 • 4 years ago
Comment 9 • 4 years ago

bugherder
https://hg.mozilla.org/mozilla-central/rev/811280bd655e
https://hg.mozilla.org/mozilla-central/rev/9e478f71ac09
https://hg.mozilla.org/mozilla-central/rev/294f3ea4ecc2
Comment 10 • 4 years ago
== Change summary for alert #28520 (as of Fri, 22 Jan 2021 11:26:18 GMT) ==
Improvements:
Ratio | Suite | Test | Platform | Options | Absolute values (old vs new)
---|---|---|---|---|---
3% | Resident Memory | | linux1804-64-shippable-qr | | 1,111,524,107.31 -> 1,076,654,974.82
3% | Heap Unclassified | | linux1804-64-shippable-qr | tp6 | 233,126,590.50 -> 227,134,265.45
2% | Resident Memory | | linux1804-64-shippable-qr | | 1,102,848,870.69 -> 1,082,966,008.69
For up to date results, see: https://treeherder.mozilla.org/perfherder/alerts?id=28520