Update searchfox for ESR102, both mozilla-esr102 and comm-esr102
Categories
(Webtools :: Searchfox, task)
Tracking
(Not tracked)
People
(Reporter: gbrown, Assigned: asuth)
We need to add esr102 to the list of branches indexed by searchfox
Prior art: Bug 1717535 (ESR91)
Assignee
Updated•3 years ago

Assignee
Comment 1•2 years ago
Looks like the .cron.yml for esr102 needs the branch added to run searchfox indexing jobs. Right now I only see esr91. I don't understand if that change should happen in mozilla-central and then will flow into esr102 or if the change should be made directly in esr102. (Although I do understand what matters is that it ends up in .cron.yml on the esr102 branch for anything to happen; mozilla-central's version is not consulted when scheduling esr102 tree jobs.)
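To make the mechanics concrete, here is a minimal sketch (in Python, assuming the taskcluster cron format of a top-level `jobs` list whose entries carry `name` and `run-on-projects`) of checking which branches a checkout's .cron.yml schedules the searchfox job on. The `searchfox-index` job name matches what mozilla-central used around this time, but verify against the actual file.

```python
# Hedged sketch: report which branches a cron job runs on, per the
# .cron.yml in a local checkout. Assumes the taskcluster cron schema
# (top-level "jobs" list, each entry with "name" and "run-on-projects");
# check the real file if the schema has moved on.
import yaml  # pip install pyyaml

def branches_for_job(cron_yml_path: str, job_name: str) -> list:
    with open(cron_yml_path) as f:
        cron = yaml.safe_load(f)
    for job in cron.get("jobs", []):
        if job.get("name") == job_name:
            return job.get("run-on-projects", [])
    return []

# Run against an esr102 checkout; once the m-c change merges over,
# this should include "mozilla-esr102".
print(branches_for_job(".cron.yml", "searchfox-index"))
```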
Reporter
Comment 2•2 years ago
It is on mozilla-central already: https://hg.mozilla.org/mozilla-central/rev/ec38c0929c7e
We haven't had any "real" merges from 102 into mozilla-esr102 yet. I think that will happen naturally soon (after Monday's release, maybe?), but this is my first involvement with an ESR release, so I'm unsure as well.
Assignee
Comment 3•2 years ago
Okay, great, then I can set this up once that happens and we're seeing the tasks and their artifacts! Thanks for the update!
Assignee
Comment 4•2 years ago
The new ESR probably wants to go in config4.
From https://bugzilla.mozilla.org/show_bug.cgi?id=1567724#c34:
- config2's indexer duration until the web-server is initiated is ~3h55m (down from ~7h52m)
- config4's is ~2h16m (down from ~3h24m)
- config1's is ~1h52m (down from ~2h09m)
Reporter
Comment 5•2 years ago
The first searchfox cron jobs have now run on mozilla-esr102: https://treeherder.mozilla.org/jobs?repo=mozilla-esr102&searchStr=searchfox
Reporter
Comment 6•2 years ago
:asuth - Reminder that esr102 will be released soon, ~June 27. Are you going to take this? Need anything from me?
Assignee
Comment 7•2 years ago
My attempt at finding a volunteer in https://chat.mozilla.org/#/room/#searchfox:mozilla.org for this unfortunately didn't pan out, so I'll take this.
Assignee
Comment 8•2 years ago
(In reply to Andrew Sutherland [:asuth] (he/him) from comment #4)
> The new ESR probably wants to go in config4.
Changed my mind on this for 2 reasons:
- config4 is basically all non-m-c branches and currently has a higher chance of infrastructure-related failures, so I think it makes sense to keep more-supported branches like esr102 on config2, which is beta/release/ESRs.
- esr78 no longer has searchfox jobs and so is no longer semantically indexed, so it can easily be rotated out to config3, which has plenty of space (because semantic indexing is what takes up space). This apparently happened back in November as part of bug 1738908, so I guess the searchfox artifacts must have lifetimes that are way too long (or were artificially extended?) given that config2 never fell over from this.
So, I'm going to:
- move the esr78s to config3
- add mozilla-esr102 to config2 (see the sketch after this list)
- add comm-esr102 at the same time, since there are synergies in doing both at once and :kats demonstrated in bug 1726109 that this is a lot less scary than I thought it was when I added mozilla-esr91
- trigger a shell against config1 and fork the comm-central tarballs we already have, since that seems most expedient, and any irrelevant processed blame commits won't really matter much (and can be gc'ed if I do the repo reconfiguration correctly)
Now that our indexing times are much, much, much faster for config2, I'm not going to worry so much about pivoting and the load balancer; some ESRs may be unavailable for a few hours or so in the worst case, on this weekend day when no one should be working.
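To make that plan concrete, a hedged sketch of the config shuffle follows. It assumes the configN.json files keep per-tree settings under a `trees` key (approximately the mozsearch config shape); the key name, file paths, and the esr91-cloning shortcut are assumptions, not the actual mozsearch-mozilla change.

```python
# Hedged sketch of the reshuffle: rotate the esr78 trees out of
# config2.json into config3.json and add mozilla-esr102 to config2.
# Assumes per-tree settings live under a "trees" key; the real
# mozsearch-mozilla change differs in detail.
import json

def load(path):
    with open(path) as f:
        return json.load(f)

def save(path, cfg):
    with open(path, "w") as f:
        json.dump(cfg, f, indent=2)

config2 = load("config2.json")
config3 = load("config3.json")

# esr78 is no longer semantically indexed, so it can live on config3.
for tree in ("mozilla-esr78", "comm-esr78"):
    if tree in config2["trees"]:
        config3["trees"][tree] = config2["trees"].pop(tree)

# Start the new ESR from a copy of esr91's settings (paths would then
# be adjusted by hand; the copy alone is not sufficient).
config2["trees"]["mozilla-esr102"] = dict(config2["trees"]["mozilla-esr91"])

save("config2.json", config2)
save("config3.json", config3)
```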
Assignee
Comment 9•2 years ago
It seems like there aren't really any synergies from forking comm-central based on what build-blame is doing, and I'm worried about my git branch shenanigans complicating things, so I'm going to just follow the very clear steps in https://github.com/mozsearch/mozsearch/blob/master/docs/newrepo.md in the interest of simplifying everyone's lives.
Assignee
Comment 10•2 years ago
One thing I hadn't realized is that for the "comm-central" repo we explicitly check out mozilla-central as a "mozilla" sub-directory, but we don't do this for any of the "comm-esr*" branches. I'm sticking with that convention for comm-esr102, since it really simplifies things.
Blame is all built, thanks to comm-central's smaller revision history and :kats' many efforts to speed up blame-building (woo!), so I'm going to start pushing things, re-triggering things, and reconfiguring load balancers. Only the esr78 jobs will experience any outage as they transition.
I'm doing 2 extra things for synergy purposes at the same time:
- Implementing bug 1775146, which has help.html list the wubkat repo, and setting up a cron job for that repo, given that the webkit-search instance Igalia runs has been having outages that impact Gecko platform engineers.
- Adding a weekly cron job for config3, because the need to manually trigger it has resulted in technical debt issues where:
  - Configurations break, but we don't notice until we're doing something else, and then we have to fix whatever broke on top of whatever was motivating the change to config3.
  - Static CSS resources are still shared across all indexers rather than being per-tree, which frequently results in roll-out issues with changes to static files. We do have a bug on file about making the static resources per-tree, but having config3 never be more than a week behind generally seems like a good thing.
It's also the case that having lambda jobs for all of the configN indexers is nice because it makes it possible to re-trigger them from the lambda UI by hitting the test button, which avoids the need to run trigger_indexer.py manually from a local shell.
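For illustration of why the lambda path is convenient, here is a minimal sketch of a handler that does the same sort of thing as trigger_indexer.py (spin up an indexer instance for a given config) but can be fired from the Lambda console's test button. The AMI id, instance sizing, and the `index-config` entry point are placeholders; mozsearch's real trigger logic lives in the mozsearch repo and differs in detail.

```python
# Hedged sketch of a config-indexer trigger lambda. Everything
# identifier-shaped here (AMI, instance type, user-data entry point)
# is a placeholder, not mozsearch's actual configuration.
import boto3

def lambda_handler(event, context):
    # Which configN to index; the Lambda console's test button can
    # pass this in the test event, defaulting to config3 here.
    config = event.get("config", "config3.json")
    ec2 = boto3.client("ec2")
    resp = ec2.run_instances(
        ImageId="ami-00000000000000000",   # indexer AMI (placeholder)
        InstanceType="m5.2xlarge",         # sizing is illustrative
        MinCount=1,
        MaxCount=1,
        # Hypothetical entry point that kicks off indexing for the config.
        UserData=f"#!/bin/bash\nindex-config {config}\n",
    )
    return {"instance_id": resp["Instances"][0]["InstanceId"]}
```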
Assignee
Comment 11•2 years ago
I think this all worked out. I sent an email to dev-platform since the root HTML page isn't the most discoverable way to find out about new indexes, and the wubkat index in particular was notable enough to definitely merit a mail.
Landed patches for/related to this: