Closed Bug 683876 Opened 13 years ago Closed 7 years ago

Use pre-calculated index statistics on places.sqlite

Categories

(Toolkit :: Places, defect, P5)

Tracking

RESOLVED WONTFIX

People

(Reporter: mak, Unassigned)

References

(Blocks 1 open bug)

Details

Attachments

(1 file, 1 obsolete file)

We currently run ANALYZE on specific expiration events, but in some cases this may not be enough, especially for users who rarely leave the browser idle.
This may cause problems since SQLite relies on these statistics to plan queries.
Another possible improvement drh suggested to us is to prefill sqlite_stat1 with data from a large database while waiting for the first ANALYZE to run.
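For readers unfamiliar with the mechanism, a minimal sketch of what prefilling could look like follows. The table names are real Places tables, but the index names and all numbers are purely illustrative, not values we would actually ship, and sqlite_stat1 must already exist (i.e. ANALYZE must have run at least once, since the table cannot be created by hand):

  -- Replace whatever statistics are present with pre-calculated ones.
  DELETE FROM sqlite_stat1;
  -- stat format: "<approx rows in index> <avg rows per distinct value of col 1> ...".
  INSERT INTO sqlite_stat1 (tbl, idx, stat)
    VALUES ('moz_places', 'moz_places_url_uniqueindex', '25000 1');
  INSERT INTO sqlite_stat1 (tbl, idx, stat)
    VALUES ('moz_historyvisits', 'moz_historyvisits_placedateindex', '80000 4 1');
  -- Force the connection to reload the new statistics without rescanning any data.
  ANALYZE sqlite_master;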
Blocks: 686025
Whiteboard: [places-next-wanted]
I think I'm going to evaluate the exact opposite option, that is, never running it.

This is for several reasons:
- Our queries are built around a certain expected data distribution. ANALYZE helps the query planner do a slightly better job for exotic distributions, usually by taking the table-scan path rather than using an index, since the latter requires building a btree. But if the statistics are outdated, doing a wrong scan on places or visits is catastrophic. The gains are in the order of milliseconds, but the failures are in the order of seconds (examples are bug 682676 and bug 686025).
- ANALYZE invalidates most prepared statements, causing 2 effects: first, we spend time re-preparing, killing some of the perf benefit; second, re-preparing the statement touches the mutex, increasing the chance of contention.
- Our queries and indices are already built with a lot of care for the best query plan, so it's really rare that ANALYZE can find a better path. By enforcing stats, we force queries to run on the path they were built for. More often we saw the opposite effect, where outdated stats took the worst path.
- Some users, and especially mobile users, may hardly ever hit idle, which is when we run ANALYZE.
- We may always miss a required ANALYZE; it needs attention, since you may analyze with an empty bookmarks table and forget to rerun it after a bookmarks import. The result is that the query planner thinks the table is almost empty, when it may instead contain thousands of entries.
- Users or add-ons may run ANALYZE externally, but we only run it on some indices, so all the other stats would stay stale.

On the other hand, current SQLite needs some stats or it may take bogus paths.
We certainly have to populate sqlite_stat1 with precalculated data at database creation, since otherwise new profiles may behave wrongly, so we may just stick with these precalculated stats and build queries on top of them. Pre-populating sqlite_stat1 is absolutely allowed, and drh suggested it to me for new databases.

The risk here is that we lose some adaptivity: for users with fancy distributions, we may have some query using an index when it could have used a slightly faster linear table scan.
Assignee: nobody → mak77
Status: NEW → ASSIGNED
> - Some users, and especially mobile users, may hardly hit idle that is where we analyze.

Although I think this is reasonable, don't we run other important jobs during idle as well?  Should we file separate bugs on them?  From bug 686025 comment 78, it sounds like something may be wrong with the idle timer in general.
I'm now going to check whether Firefox 6 or 7 had some idle bugs; if I find anything I will file a report.
The idle-on-mobile issue is absolutely something to be figured out: we run "important" (but not lifesaving) stuff on idle, and a user who hardly hits idle may run into small issues (from the Places point of view those may be a less clean database and less precise frecency data; other components may have worse issues). It's not a simple problem to solve though, since we run things on idle exactly because those are costly operations. Maybe we should have a system service for maintenance tasks rather than relying on browser idle.
Attached patch patch v1.0 (obsolete) (deleted) — — Splinter Review
This is the idea: on schema change and on maintenance we replace the sqlite_stat1 content with pre-calculated data. The statistics are the same ones the statements were built against, i.e. they reflect the expected distribution of data given the schema structure.

Note that this will require a schema version bump in the same release it lands in, so that it runs at least once. I've not bumped the schema here since there are other bugs that plan to do a schema bump (inline autocomplete and favicons guid), and we may just ensure they land in the same version.

I think sdwilsh is the best person to evaluate this (he also added analyze support in the first place)
Attachment #563393 - Flags: review?(sdwilsh)
Summary: Run ANALYZE more often, ensure a maximum timeframe between runs → Use pre-calculated index statistics on places.sqlite
Actually, see https://bugzilla.mozilla.org/show_bug.cgi?id=686025#c89 for an explanation of why we were not running ANALYZE often enough. But even if we fix it to run on each idle, that may not be enough for Mobile. Thus I still think this is the best approach.
Depends on: 690354
Attached patch patch v1.1 (deleted) — — Splinter Review
Unbitrot on top of the patch in bug 690354
Attachment #563393 - Attachment is obsolete: true
Attachment #563393 - Flags: review?(sdwilsh)
Attachment #563728 - Flags: review?(sdwilsh)
note to myself: s/PR_FALSE/false/
If you have a timeline for landing this, let me know so I can make sure favicons guids gets into the same schema bump...
(In reply to Richard Newman [:rnewman] from comment #8)
> If you have a timeline for landing this, let me know so I can make sure
> favicons guids gets into the same schema bump...

That was the idea; btw, I have to collect some perf data to evaluate the impact of the change first.
I'm concerned about this: we landed this originally because we wanted to help edge case people.  If we don't care about them anymore, why don't we just back out our use of this code (and drop `sqlite_stat1`)?
We landed it to drive SQLite and avoid the bad choices the 3.6.x planner was making. I don't think this helps any edge case by an interesting amount; instead it makes it easy to handle common cases wrongly.
When you wrongly handle an edge case you may lose a couple milliseconds (time to build a small btree), but when you wrongly handle a common case you lose seconds (time to do a table scan on a large table).
Once you create sqlite_stat1 you can't drop it (you can empty it, with the same result), but regardless I'd prefer telling SQLite what to do through these stats rather than relying on the hope it will do the right thing.

Btw, as said I'll measure the impact on fancy distributions and report it.
Blocks: PlacesJank
Comment on attachment 563728 [details] [diff] [review]
patch v1.1

clearing review, while waiting to fetch actual numbers.
Attachment #563728 - Flags: review?(sdwilsh)
see also https://bugzilla.mozilla.org/show_bug.cgi?id=702889#c3 for a really nice agreement on this :)
FWIW, baking the query plan into the query itself using explicit index hints (option 3 in that comment) is a lot less scary to me than baking the query plan into hardcoded table statistics.

It's not using the same query plan everywhere that I'm afraid of so much as the necessity to keep the statistics table up to date and correct as changes are made to the schema, plus the difficulty of verifying that the statistics are correct.
(In reply to Justin Lebar [:jlebar] from comment #14)
> It's not using the same query plan everywhere that I'm afraid of so much as
> the necessity to keep the statistics table up to date and correct as changes
> are made to the schema, plus the difficulty of verifying that the statistics
> are correct.

The fact is that adding '+' to each query, or populating the statistics table, is the same thing from a maintenance point of view.
And here's why:
- when you change the schema you have to go through each query and check that the '+' still applies correctly.
- when we update the SQLite engine and the upgrade includes changes to the optimizer, again you have to go through each query and check that the '+' still applies correctly.

To verify, you check the query's EXPLAIN output (and the simpler query plan). This is something you have to do in exactly the same way whether you want to fill up the stats table or add the index exclusions.

I don't have a strong preference for going either of the 2 ways, but the stats table has a couple of advantages:
1. we have telemetry data telling us the worst cases in the wild
2. we can leverage the SQLite optimizer's capabilities to improve the queries themselves. By this I mean that sometimes you may not notice a possible improvement and would just blindly add a '+', while checking the EXPLAIN output with stats in place you may be surprised to notice you can make the query better, and actually do so.

That's why I went with this suggestion, but I'm absolutely open to discussion.
Obviously the '+' approach has the 'locality' advantage on its side.
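For reference, the '+' being discussed is SQLite's unary plus operator: prefixing a column in a WHERE or ORDER BY term with '+' prevents that term from being satisfied by an index, without changing the result. A hypothetical sketch (this is not one of our actual statements, just an illustration on real moz_places columns):

  -- Without the '+', the planner could choose an index on visit_count for this
  -- constraint; the unary '+' disqualifies the term from index use, so the plan
  -- the query was designed for is taken instead.
  SELECT url, title
  FROM moz_places
  WHERE +visit_count > 0 AND last_visit_date IS NOT NULL
  ORDER BY frecency DESC
  LIMIT 10;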
Sorry, we're talking past each other in this bug and dev.platform.  Let's discuss in this bug -- I doubt most people care about these details.  :)

> To make the verification you check the query EXPLAIN (and the simpler plan). This is something you 
> have to do exactly the same both if you want to fill up the stats table, or if you want to add the 
> indexes exclusions.

I just suggested in dev.platform that if it's really important that we always run exactly the same plan, then we should have automated tests that the EXPLAIN output doesn't change.  Then I don't care what method we use!
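As a rough illustration of such a test, the harness could string-compare the EXPLAIN QUERY PLAN output of each statement against a stored expectation; the query and the expected plan below are illustrative, not actual Places statements:

  EXPLAIN QUERY PLAN
  SELECT h.url, h.title
  FROM moz_places h
  JOIN moz_historyvisits v ON v.place_id = h.id
  ORDER BY v.visit_date DESC
  LIMIT 10;
  -- Hypothetical expected output the test would assert on, e.g.:
  --   SCAN TABLE moz_historyvisits AS v USING INDEX moz_historyvisits_dateindex
  --   SEARCH TABLE moz_places AS h USING INTEGER PRIMARY KEY (rowid=?)
  -- Any change in this output would flag a query-plan regression.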
the "dirty" talos tests were supposed to do something similar, but actually they use pretty much random data and don't leverage enough functionality.
We should have scripts that starting from telemetry data build an average and worst case databases and runs a fixed sets of queries on them, reporting the results.
> We should have scripts that starting from telemetry data build an average and worst case databases 
> and runs a fixed sets of queries on them

If you have a reasonable set of regression tests for this, then I'll shut up and be happy.  :)  But it's really important that these tests have full coverage of every query and are updated whenever we add queries.  It's not sufficient to use a fixed set and then forget about it.
(In reply to Justin Lebar [:jlebar] from comment #19)
> But it's really important that these tests have full
> coverage of every query and are updated whenever we add queries.  It's not
> sufficient to use a fixed set and then forget about it.

To be honest, your first request is just impossible to realize, due to the very nature of dynamic queries. You won't ever be able to test every possible query and data distribution; you can only get close.
But when we implement the thing, we should really not forget to keep it updated. It will also save us time by automating part of what we do manually.

The point is that paying this price now would block a fix that may save our users lots of expensive ANALYZE runs, and atm I don't have the time to build a test harness; there are some large refactorings to do to move on.  But I'm all in favor of it, we just need the resources to do it.
If you write queries that must run with some plan, but you can't test it, and you have no way to tell if and when this regresses, and we know that things like this have regressed in the past, and we know when just *one* query regresses, it can cause a disaster...how can we possibly accept a state of affairs where we don't have regression tests for this?

Maybe it's the best cost/benefit tradeoff right now.  I'd just hate to see another "Firefox hangs periodically" bug due to our use of SQLite.  Those bugs are very serious, and without regression tests, how do we have confidence we're not going to cause another bug like it?

We have the ear of SQLite developers.  Would you be willing to talk with them about making ANALYZE run faster (or even run automatically)?  If we agree that statistics gathered by the DB itself are best, if they're fresh, then maybe we should at least see if the problems with ANALYZE can be fixed.  If that's no good, it's also possible that they could extend the language so you can explicitly specify an index to use, rather than specify indices *not* to use; other SQL dialects have this.

Maybe you should do this bug and follow up about ANALYZE.  I'd be OK with that if the follow-up actually happened -- if we didn't just say this bug is good enough and drop the issue.

(There are all kinds of things you could do to ensure that the statistics are relatively fresh.  For example, you could watch how long queries take to execute, and schedule an ANALYZE whenever a query takes too long.   You wouldn't even have to analyze the whole DB in this case, just the tables touched by your query.)
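Worth noting: ANALYZE accepts a table name, so this kind of reactive re-analysis could be restricted to the tables the slow query touched rather than the whole database; a tiny sketch, using a real Places table name purely as an example:

  -- Re-gather statistics only for this table and its indices.
  ANALYZE moz_historyvisits;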
> If I should be honest, your first request is just impossible to realize, for the nature itself of 
> dynamic queries. You won't be ever able to test any possible query and data distribution. You can go 
> near.

This is exactly my point.  The whole purpose of ANALYZE is so you don't have to worry about testing all queries and all distributions.  But therefore if you don't have ANALYZE, you *do* have to worry about this.  If it's impossible to test all queries, then it's impossible to know whether our code is correct, which usually means we should consider another approach.

It seems like you're trying to have it both ways -- no ANALYZE, and no checking that we're using sane query plans -- and I'm concerned this is a path to the Dark Side.

But, hey, this is a testable hypothesis.  Do this bug without any tests, then see if we regress something at some point in the future.  I think a regression at some point is likely, but I could be wrong!
(In reply to Justin Lebar [:jlebar] from comment #21)
> how can we possibly accept a
> state of affairs where we don't have regression tests for this?

Do we have regression testing for any other thing that regressed in the past? Should I start the list? Mine is not a good argument (2 wrongs don't make a right), but neither is blocking fixes on the creation of a new testing harness, imo.

> Maybe it's the best cost/benefit tradeoff right now.  I'd just hate to see
> another "Firefox hangs periodically" bug due to our use of SQLite.

That may happen whatever we do; code has bugs and tests can't have 100% coverage. And it may happen for a lot of things that are not SQLite related, as well.

> We have the ear of SQLite developers.  Would you be willing to talk with
> them about making ANALYZE run faster (or even run automatically)?  If we
> agree that statistics gathered by the DB itself are best, if they're fresh,
> then maybe we should at least see if the problems with ANALYZE can be fixed.

The problem with ANALYZE is how it is designed to work; I don't see any way to fix that. I also don't see why we concentrate so much on this: Places is the only database that tried to use it; do you see issues with the other databases? The experiment failed, and we learned a lot from it.

I'll try to ping the SQLite team about it and see if they have plans, but I suspect we just did it wrong. When we started seeing ANALYZE issues, Richard suggested that I put precalculated stats in the stats table when we create the database, to at least avoid slowdowns on initial changes, so this is just what I'm suggesting here. I didn't even know that was possible.

> For example, you could watch how long queries take to
> execute, and schedule an ANALYZE whenever a query takes too long.

Then it would be too late: we'd have a long freeze and then we'd fix it.  We want to prevent freezes, not fix them later. If we freeze in front of the user, we've already lost.

Btw, database design is hard, and you have to put some faith in your DBAs or DBOs; you can't just hope the engine will do the best thing. Often it won't.
And obviously you can rely on tests, keeping in mind that writing those tests gets more expensive the more coverage you want.
Assuming I agree we want this kind of testing, who is going to write the tests? Can we assign someone to the task?

(In reply to Justin Lebar [:jlebar] from comment #22)
> This is exactly my point.  The whole purpose of ANALYZE is so you don't have
> to worry about testing all queries and all distributions.

I fear you are following the same reasoning we did originally, which finally led to that bad hang. Don't think ANALYZE can solve our problems or save broken queries. If you write a broken query, you are hosed whatever you do.

> It seems like you're trying to have it both ways -- no ANALYZE, and no
> checking that we're using sane query plans -- and I'm concerned this is a
> path to the Dark Side.

All of our queries have many hours of testing behind them, using the instruments SQLite gives us; the dark side is hoping something will fall from the sky and do that work for you. Database design just doesn't work like that, and ANALYZE is not that instrument. Our query was perfectly optimized and never gave a single problem; a missing ANALYZE killed it.
Do you really think we write a query hoping it will work and just release it?
(In reply to Marco Bonardo [:mak] from comment #15)
> - when we update the SQLite engine and the upgrade includes changes to the
> optimizer, again you have to go through each query and check that the '+'
> still applies correctly.
In practice, we've never done this.  I'm not convinced we'd be able to in the future.
(In reply to Shawn Wilsher :sdwilsh from comment #24)
> (In reply to Marco Bonardo [:mak] from comment #15)
> > - when we update the SQLite engine and the upgrade includes changes to the
> > optimizer, again you have to go through each query and check that the '+'
> > still applies correctly.
> In practice, we've never done this.  I'm not convinced we'd be able to in
> the future.

Well, not completely true: in Places we did it at each release whose changelog showed changes to the query optimizer, like 3.6.x. Still, even doing that, we missed a case; it's an error-prone and developer-time-expensive path. This is where the automatic query testing suggested above would be extremely useful, even if not perfect.

To sum up the discussion, we have 3 possible approaches:

* Mark queries with index exclusion and disable analyze.
 - PRO: locality, the path is hardcoded in the query
 - PRO: the queries behavior is predictable at design time
 - PRO: cheap, since nothing has to be done on the user's side
 - CON: error-prone, may miss some statement or expression cases
 - CON: developer-time, check each query on schema, behavior or SQLite changes

* analyze
 - PRO: always best query path, if the query is not plain wrong
 - PRO: global, valid for any new and old statement
 - PRO/CON: it may "mask" cases where there is space for major improvements
 - CON: user-time, it may take lots of ms and has to be run often
 - CON: hard to find the "right time to run"; stale stats may hurt badly
 - CON: the query behavior is unpredictable at design time

* pre-calculated stats
 - PRO: based on telemetry data so the worst known case is handled
 - PRO: global, valid for any new and old statement
 - PRO: cheap, since it has to run just once
 - PRO: the queries behavior is predictable at design time
 - CON: smaller data distributions may end up building some useless in-memory btrees
 - CON: we may underestimate the worst case, telemetry isn't perfect
 - CON: developer-time, check each query on schema or behavior changes

did I miss something?
> did I miss something?

These are pretty good.  I'd add

> * Mark queries with index exclusion and disable analyze.
CON: May not be able to get SQLite to take the precise plan you want.  (You can exclude indices, but there's apparently no way to force SQLite to *use* an index, right?)

> * analyze
PRO: Doesn't require automated testing (assuming we believe that SQLite knows how to choose a good query plan if it has good data, which it seems like we do)
PRO: Can avoid repeated worst-case behavior via introspection.  (If a query takes too long, analyze the relevant tables.  The next time we run the query, it should be faster.) (1)
CON: May require modifications to SQLite to be fast enough.

And two cons which apply to both of these:

> * Mark queries with index exclusion and disable analyze.
> * pre-calculated stats
 - CON: If a worst-case path is hit, the query will continue to use that plan until we release a software update.  (2)
 - CON: We don't have the resources to automatically test that query plans don't regress, so we'll have to rely on signals gathered from telemetry (which we may also not have the resources or wherewithal to watch carefully enough).  (3)
It's (1) compared to (2), which, *in the absence of automated tests* (which seems to be the current realistic proposal), makes ANALYZE so much more appealing to me.  Assuming we don't and won't have comprehensive query plan tests, I view all three options as essentially unpredictable.  But at least ANALYZE can fix itself.  I'm perfectly happy to accept a query running slowly once as the cost.

Of course, ANALYZE may be too slow in its current form.  But if we agreed that it would be nice to use ANALYZE if it were faster, we could see if our SQLite contacts can do anything about speeding it up.
(In reply to Justin Lebar [:jlebar] from comment #26)
> CON: May not be able to get SQLite to take the precise plan you want.  (You
> can exclude indices, but there's apparently no way to force SQLite to *use*
> an index, right?)

right!

> And two cons which apply to both of these:
> 
> > * Mark queries with index exclusion and disable analyze.
> > * pre-calculated stats
>  - CON: If a worst-case path is hit, the query will continue to use that
> plan until we release a software update.  (2)

Actually, this applies to all three; indeed we were using ANALYZE, but we were not calling it often enough, and we could only fix that by releasing a new version.
If you hit a bad path you are hosed in all three cases, unless you find the perfect way to call ANALYZE, which is really hard: the more you run it, the more you hit the user with a perf hog; the less you run it, the more you risk hitting bad paths.

>  - CON: We don't have the resources to automatically test that query plans
> don't regress, so we'll have to rely on signals gathered from telemetry
> (which we may also not have the resources or wherewithal to watch carefully
> enough).  (3)

This may apply to (2) as well; we would need the same testing to ensure we are calling ANALYZE at the right time.
(In reply to Justin Lebar [:jlebar] from comment #27)
> I'm perfectly happy to accept a query running
> slowly once as the cost.

FWIW, slowly may mean your UI hangs for 10 seconds; personally I would not accept that.
> Actually, this applies to all of three, indeed we are using analyze, but we were not calling it 
> often enough. We could fix it only by releasing a new version.

If we were using analyze and calling it whenever a query runs slowly, as I suggest, then this would not happen, correct?

> FWIW, slowly may mean your UI is hanged for 10 seconds, personally I'd not accept this.

The choice is between a query hanging the UI for 10s *once*, and a query hanging the UI for 10s *every time it runs*.

The failure mode of the non-analyze solutions involves queries running slowly and hanging the UI repeatedly until we release a software update, while the failure mode of ANALYZE involves queries running slowly exactly once.  You're comparing the failure mode of ANALYZE (10s hang) to the *non-failure mode* of non-ANALYZE (no hang), which is not meaningful.

The premise behind the hardcoded statistics is that they need only be very roughly accurate in order to avoid worst-case behavior.  If we accept that premise, then we won't need to run ANALYZE particularly often in order to get correct query plans.
(In reply to Justin Lebar [:jlebar] from comment #30)
> > Actually, this applies to all of three, indeed we are using analyze, but we were not calling it 
> > often enough. We could fix it only by releasing a new version.
> 
> If we were using analyze and calling it whenever a query runs slowly, as I
> suggest, then this would not happen, correct?

We were using ANALYZE, but only updating the stats when we thought things had changed enough. If we had used your introspection tactic, the UI would still have hung for many seconds; it's likely the user would have killed firefox.exe before we could even decide to run ANALYZE. And our detection algorithm may have bugs.
These days we should have hang detection and reporting, and that may be a first kind of protection.

Actually we could do a halfway approach: put pre-calculated stats into the db on creation, as the SQLite team suggested, and use this sort of introspection to decide whether they need to be updated, reporting through telemetry each time this happens. The problem is finding good enough hooks so that we don't miss any possible slow call; this should likely be integrated into Storage's internal methods.
It should have a way to compare 2 runs of the same statement, which means storing previous data somewhere on disk, and that is additional I/O on each query. Comparing just session data may not be enough: you may have a query that runs fast, close the browser, do some large operation like an import on the next startup, and then the first time you run the same query it would be slow, but you don't know it is slowER than it was. Using a threshold is again wrong, since some queries ARE slow (think of VACUUM for example) but don't imply an ANALYZE is needed.
We could maybe annotate statements to recognize misbehaving ones, but that's again error-prone and system-dependent (how can I tell whether a query is slow when it may run on an old Intel Celeron that is slow compared to my i7?).

> The premise behind the hardcoded statistics is that they need only be very
> roughly accurate in order to avoid worst-case behavior.  If we accept that
> premise, then we won't need to run ANALYZE particularly often in order to
> get correct query plans.

This is correct: we don't need extremely precise stats, we need stats that drive queries the way they were designed to run. If we accept that premise, you can see that running ANALYZE adds little benefit over pre-built data. It adds some sort of "protection" against our mistakes, but we don't know when we can stop running it and when we have to start running it again, and failing to correctly predict those points burns us.
Let me emphasize again that, in the presence of comprehensive tests, pre-calculated statistics seem quite safe, and possibly even safer than analyze.

I just looked at the patch (I should have looked at it much earlier -- I'm sorry).  It's simpler than I expected, but it's still about as opaque as I'd expected.  It's really hard for me to tell whether "20 1" is the right setting for moz_anno_attributes, or whether it should be "200 1", and whether the difference between 20 and 200 is significant.

There are 49 magic numbers here.  We agree that it's crucial that they yield the correct query plans -- if not, Firefox will perform very badly.

So I think the relevant question now is: Supposing we used this patch, how would you verify that these numbers yield the correct plans?

One idea would be to report the runtime of each query as a separate telemetry bucket.  Divide nightly/aurora users into two groups.  One group gets ANALYZE, the other gets the pre-populated statistics.  Let them run for a while, then compare each of the queries' runtimes and see whether pre-populated is a lot worse.  If so, that would indicate that pre-populated causes problems for some users.

This seems like a reasonable amount of testing to expect from this change.  But it also seems to me that this would be about as hard as writing a comprehensive automatic test suite.

But perhaps you have a different plan to verify that the numbers are correct?
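For anyone reading along, the stat column of sqlite_stat1 is a space-separated list: the first integer is the approximate number of rows in the index, and each following integer is the average number of rows matching a prefix of the index columns. So "20 1" vs "200 1" differ only in the estimated table size; both say the first indexed column is essentially unique. A hypothetical row, shown only to illustrate the format (the index name is a guess, not taken from the patch):

  -- "~20 rows in the index, ~1 row per distinct value of the first column".
  INSERT INTO sqlite_stat1 (tbl, idx, stat)
    VALUES ('moz_anno_attributes', 'sqlite_autoindex_moz_anno_attributes_1', '20 1');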
(In reply to Justin Lebar [:jlebar] from comment #32)
> So I think the relevant question now is: Supposing we used this patch, how
> would you verify that these numbers yield the correct plans?

The code in the patch is old; a new patch may be simpler, though the approach would stay the same. I'd use these stats on a bunch of dbs I have locally, of various sizes (small and large), sent by different people, and check the most problematic queries on those. This is not the best approach, but it fits the available time resources.

> One idea would be to report the runtime of each query as a separate
> telemetry bucket.  Divide nightly/aurora users into two groups.  One group
> gets ANALYZE, the other gets the pre-populated statistics.  Let them run for
> a while, then compare each of the queries' runtimes and see whether
> pre-populated is a lot worse.  If so, that would indicate that pre-populated
> causes problems for some users.

While that may work in theory, each user has a different db size, a different data distribution, runs different queries (meaning different expressions within the same query), and different add-ons. You can hope to get good averaging of results, but that's not guaranteed given the number of Nightly testers; you'd need millions of results. What that kind of testing can detect is just large discrepancies. What would you learn from seeing that a certain query takes 130ms for one group and 100ms for the other? Could you say for sure the difference is due to ANALYZE? And would it be a gain if ANALYZE takes 50ms to run?

Btw, related to this, bug 699051 is adding telemetry reporting of slow queries, so we may detect behavior changes really quickly when some query suddenly gets reported more or less often.

> But it also seems to me that this would be about as hard as writing a
> comprehensive automatic test suite.

Probably; I'd rather spend the same resources starting to collect queries or making a script that builds databases with random data.
> While may work theoretically, each user has different db size, different data distribution, runs 
> different queries (intended as different expressions in the same query), different addons.

The purpose isn't to look for users whose queries take 30ms longer, but to look for a spike of users whose queries take seconds longer.  You wouldn't need many such users in order to believe that there may be a performance problem with the hardcoded statistics, although of course you'd have to investigate manually.

Part of the problem with using a random database is that the databases in the real world don't necessarily match the expected constraints.  (For example, we saw that the DB on my phone was much bigger than it should have been.)  Using real DBs is better, but then of course you can't distribute them.

Having a tool which runs some queries against my DB and then reports whether the queries perform well would be a step in the right direction.  We could make that tool accessible through the error console.  You have to write a tool like this anyway, so you can test the DBs you have.
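As a rough manual approximation of that tool, the sqlite3 command-line shell can already report both the plan and the wall-clock time when pointed at a copy of places.sqlite (the query here is illustrative, not one of our actual statements):

  .timer on
  -- First check which plan the statement gets on this particular database...
  EXPLAIN QUERY PLAN
  SELECT url FROM moz_places WHERE frecency <> 0 ORDER BY frecency DESC LIMIT 25;
  -- ...then run it for real; with .timer on the shell prints the elapsed time.
  SELECT url FROM moz_places WHERE frecency <> 0 ORDER BY frecency DESC LIMIT 25;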
Depends on: 708413
Assignee: mak77 → nobody
Whiteboard: [places-next-wanted]
Status: ASSIGNED → NEW
Priority: -- → P5
Wontfixing in favor of automatic SQLite analysis in SQLite 3.18.x (bug 1354032).
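For reference, the feature referred to here is presumably PRAGMA optimize, added in SQLite 3.18.0: it lets the engine decide which tables, if any, would benefit from fresh statistics and analyzes only those, so it can be run cheaply, for example shortly before closing a connection.

  -- Typical usage sketch: let SQLite re-analyze only where it thinks it helps.
  PRAGMA optimize;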
Status: NEW → RESOLVED
Closed: 7 years ago
Resolution: --- → WONTFIX