Closed
Bug 780022
Opened 12 years ago
Closed 11 years ago
[tracker] reimage 73 linux32, linux64 ix builders as win64 builders
Categories
(Infrastructure & Operations Graveyard :: CIDuty, task)
Tracking
(Not tracked)
RESOLVED
FIXED
People
(Reporter: joduinn, Assigned: hwine)
References
Details
(Whiteboard: [tracker] [reit])
(In a meeting w/Melissa this morning, and now w/hwine, we couldn't find a bug for this despite prior discussions/meetings, so filing now for tracking.)
Once we offload production linux desktop builds to AWS (bug#772446), please reimage the linux32 and linux64 ix builders as win64 builders. Increasing our win64 pool like this should help improve wait times on our win32 and win64 builds.
Filing in RelEng for now, while we figure out the logistics of using new slaves on release trains and how many physical linux32/64 builders can be spared to help win64.
Note: For the ix machines in 650castro, bug#712456 tracks upgrading the physical hardware as part of moving these machines from 650castro to a "real" colo.
Updated•12 years ago
Whiteboard: [tracker]
Comment 1•12 years ago
Converting the hardware that's already in scl1 is easier. Those machines include:
linux-ix-slave01.build.scl1
linux-ix-slave02.build.scl1
linux-ix-slave03.build.scl1
linux-ix-slave04.build.scl1
linux-ix-slave05.build.scl1
linux-ix-slave06.build.scl1
linux-ix-slave07.build.scl1
linux-ix-slave08.build.scl1
linux-ix-slave09.build.scl1
linux-ix-slave10.build.scl1
linux-ix-slave11.build.scl1
linux-ix-slave12.build.scl1
linux-ix-slave13.build.scl1
linux-ix-slave14.build.scl1
linux-ix-slave15.build.scl1
linux-ix-slave16.build.scl1
linux-ix-slave17.build.scl1
linux-ix-slave18.build.scl1
linux-ix-slave19.build.scl1
linux-ix-slave20.build.scl1
linux-ix-slave21.build.scl1
linux-ix-slave22.build.scl1
linux-ix-slave23.build.scl1
linux-ix-slave24.build.scl1
linux-ix-slave25.build.scl1
linux-ix-slave26.build.scl1
linux-ix-slave27.build.scl1
linux-ix-slave28.build.scl1
linux-ix-slave29.build.scl1
linux-ix-slave30.build.scl1
linux-ix-slave31.build.scl1
linux-ix-slave32.build.scl1
linux-ix-slave33.build.scl1
linux-ix-slave34.build.scl1
linux-ix-slave35.build.scl1
linux-ix-slave36.build.scl1
linux-ix-slave37.build.scl1
linux-ix-slave38.build.scl1
linux-ix-slave39.build.scl1
linux-ix-slave40.build.scl1
linux-ix-slave41.build.scl1
linux-ix-slave42.build.scl1
linux64-ix-slave01.build.scl1
linux64-ix-slave02.build.scl1
linux64-ix-slave03.build.scl1
linux64-ix-slave04.build.scl1
linux64-ix-slave05.build.scl1
linux64-ix-slave06.build.scl1
linux64-ix-slave07.build.scl1
linux64-ix-slave08.build.scl1
linux64-ix-slave09.build.scl1
linux64-ix-slave10.build.scl1
linux64-ix-slave11.build.scl1
linux64-ix-slave12.build.scl1
linux64-ix-slave13.build.scl1
linux64-ix-slave14.build.scl1
linux64-ix-slave15.build.scl1
linux64-ix-slave16.build.scl1
linux64-ix-slave17.build.scl1
linux64-ix-slave18.build.scl1
linux64-ix-slave19.build.scl1
linux64-ix-slave20.build.scl1
linux64-ix-slave21.build.scl1
linux64-ix-slave22.build.scl1
linux64-ix-slave23.build.scl1
linux64-ix-slave24.build.scl1
linux64-ix-slave25.build.scl1
linux64-ix-slave26.build.scl1
linux64-ix-slave27.build.scl1
linux64-ix-slave28.build.scl1
linux64-ix-slave29.build.scl1
linux64-ix-slave30.build.scl1
linux64-ix-slave31.build.scl1
linux64-ix-slave32.build.scl1
linux64-ix-slave33.build.scl1
linux64-ix-slave34.build.scl1
linux64-ix-slave35.build.scl1
linux64-ix-slave36.build.scl1
linux64-ix-slave38.build.scl1
linux64-ix-slave39.build.scl1
linux64-ix-slave40.build.scl1
linux64-ix-slave41.build.scl1
There are additional machines in mtv1 that need hardware upgrades before they can move to a datacenter, which means additional time and effort, and we would need to find space for them in scl3 (how much space is available in scl3 depends on how many w8 test boxes we purchase):
mv-moz2-linux-ix-slave01.build.mtv1
mv-moz2-linux-ix-slave02.build.mtv1
mv-moz2-linux-ix-slave03.build.mtv1
mv-moz2-linux-ix-slave04.build.mtv1
mv-moz2-linux-ix-slave05.build.mtv1
mv-moz2-linux-ix-slave06.build.mtv1
mv-moz2-linux-ix-slave07.build.mtv1
mv-moz2-linux-ix-slave08.build.mtv1
mv-moz2-linux-ix-slave09.build.mtv1
mv-moz2-linux-ix-slave10.build.mtv1
mv-moz2-linux-ix-slave11.build.mtv1
mv-moz2-linux-ix-slave12.build.mtv1
mv-moz2-linux-ix-slave13.build.mtv1
mv-moz2-linux-ix-slave14.build.mtv1
mv-moz2-linux-ix-slave15.build.mtv1
mv-moz2-linux-ix-slave16.build.mtv1
mv-moz2-linux-ix-slave17.build.mtv1
mv-moz2-linux-ix-slave18.build.mtv1
mv-moz2-linux-ix-slave19.build.mtv1
mv-moz2-linux-ix-slave20.build.mtv1
mv-moz2-linux-ix-slave21.build.mtv1
mv-moz2-linux-ix-slave22.build.mtv1
mv-moz2-linux-ix-slave23.build.mtv1
The remaining ix machines in mtv1 are mw32 machines and are covered under a different bug.
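For batch planning, the host lists above can be regenerated from their numbering ranges rather than copied by hand. A minimal Python sketch, assuming the ranges listed in this comment (note that linux64-ix-slave37 does not appear above):

# Sketch: regenerate the candidate host lists from the ranges in comment 1.
def hosts(prefix, domain, numbers):
    """Yield zero-padded hostnames like linux-ix-slave01.build.scl1."""
    for n in numbers:
        yield "%s%02d.%s" % (prefix, n, domain)

scl1_linux32 = list(hosts("linux-ix-slave", "build.scl1", range(1, 43)))
scl1_linux64 = list(hosts("linux64-ix-slave", "build.scl1",
                          [n for n in range(1, 42) if n != 37]))
mtv1_linux32 = list(hosts("mv-moz2-linux-ix-slave", "build.mtv1", range(1, 24)))

for name in scl1_linux32 + scl1_linux64 + mtv1_linux32:
    print(name)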
Assignee
Updated•12 years ago
Whiteboard: [tracker] → [tracker] [reit]
Assignee
Comment 2•12 years ago
Moved the mv-moz2-linux machines to bug 784721, and the upgrade dependency with them.
No longer depends on: 712456
Assignee
Comment 3•12 years ago
We will eventually be moving all of the linux 32 & 64-bit boxes to windows 64. We will add batches to this bug as they become available to move.
If any lead time is needed before the first move, please start that work now if it's a one-shot task (as was done in bug 758275 comment #18).
Also, let us know a rough estimate of the turnaround time for the reimaging and associated infrastructure changes per batch.
Thanks!
Comment 4•12 years ago
Based on current build/try wait times, we can safely re-image the following machines to w64 now:
Linux (build slaves)
linux-ix-slave38.build.scl1
linux-ix-slave39.build.scl1
linux-ix-slave40.build.scl1
linux-ix-slave41.build.scl1
linux-ix-slave42.build.scl1
Linux64 (trybuild slaves)
linux64-ix-slave36.build.scl1
linux64-ix-slave38.build.scl1
linux64-ix-slave39.build.scl1
linux64-ix-slave40.build.scl1
linux64-ix-slave41.build.scl1
Updated•12 years ago
Assignee: hwine → dustin
Comment 6•12 years ago
Added these to DHCP in Windows; will start the reimaging process shortly.
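A quick sanity check before kicking off the reimage is to confirm that each host in the batch still resolves. A minimal sketch using only the Python standard library; the host list is the batch from comment 4, and the .mozilla.com suffix is assumed:

import socket

# Batch from comment 4; the .mozilla.com suffix is an assumption.
BATCH = ["linux-ix-slave%02d.build.scl1.mozilla.com" % n for n in range(38, 43)]
BATCH += ["linux64-ix-slave%02d.build.scl1.mozilla.com" % n
          for n in (36, 38, 39, 40, 41)]

for host in BATCH:
    try:
        print("%s has address %s" % (host, socket.gethostbyname(host)))
    except socket.gaierror as exc:
        print("%s does not resolve: %s" % (host, exc))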
Comment 7•12 years ago
(In reply to Chris Cooper [:coop] from comment #4)
These machines have been reimaged as w64.
Comment 8•12 years ago
hwine, coop, joduinn: are these all of the machines we'll be switching? If so, I'll close out this bug.
QA Contact: armenzg → arich
Updated•12 years ago
Assignee: mlarrain → arich
QA Contact: arich → armenzg
Assignee
Comment 9•12 years ago
:arr - no, all of the machines in comment #1 will eventually be converted. Sounds like the first batch is done.
To avoid confusion, let's do future batches in dependent bugs and keep this one as the tracker for the full set.
Summary: reimage linux32, linux64 ix builders as win64 builders → [tracker] reimage linux32, linux64 ix builders as win64 builders
Comment 10•12 years ago
Sounds good, I'll reassign this one to you as a tracker and you can open up specific bugs in the relops queue for batches of machines for us to reimage.
Assignee: arich → hwine
Assignee
Comment 11•12 years ago
Updated the summary - we started with 83 machines in this bug (42 linux32, 41 linux64) from comment #1.
10 are now in production with the close of bug 786035.
Next step: RelEng to identify the next batch to reimage.
Summary: [tracker] reimage linux32, linux64 ix builders as win64 builders → [tracker] reimage 73 linux32, linux64 ix builders as win64 builders
Comment 12•12 years ago
Some of these machines could go ahead now: the ones on try where we've already dropped non-mock builds. See
http://buildbot-master31.srv.releng.scl3.mozilla.com:8101/buildslaves
for slaves with 'no builders' against them. Bug 804766 covers the *-vmw-* machines; I don't see a bug for the mac slaves.
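One way to pull that list automatically is to fetch the buildslaves page and look for the 'no builders' marker. A rough Python sketch; the page layout (one slave per HTML table row, the literal "no builders" text) is an assumption about this buildbot version, so verify matches by hand:

import re
import urllib.request

# Page layout (one slave per <tr>, literal "no builders" text) is assumed.
URL = "http://buildbot-master31.srv.releng.scl3.mozilla.com:8101/buildslaves"
html = urllib.request.urlopen(URL).read().decode("utf-8", "replace")

for row in re.split(r"<tr[^>]*>", html):
    if "no builders" in row.lower():
        match = re.search(r"[\w.-]+-slave\d+", row)  # first slave-like name in the row
        if match:
            print(match.group(0))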
Comment 13•12 years ago
What is the current status of this bug? Thanks!
Comment 14•12 years ago
The linux 32 and 64 machines were converted to foopies and mock builders. We have a handful (6 of each) left that will be decommissioned when they are no longer serving their current purpose. I don't think there's any further action left for this.
Comment 15•12 years ago
Are these the ones that are staying around a little longer? (I want to adjust slavealloc)
linux-ix-slave01.build.scl1.mozilla.com has address 10.12.48.195
linux-ix-slave02.build.scl1.mozilla.com has address 10.12.48.196
linux-ix-slave03.build.scl1.mozilla.com has address 10.12.48.197
linux-ix-slave04.build.scl1.mozilla.com has address 10.12.48.198
linux-ix-slave05.build.scl1.mozilla.com has address 10.12.48.199
linux-ix-slave06.build.scl1.mozilla.com has address 10.12.48.200
linux64-ix-slave01.build.scl1.mozilla.com has address 10.12.49.44
linux64-ix-slave02.build.scl1.mozilla.com has address 10.12.49.45
linux64-ix-slave03.build.scl1.mozilla.com has address 10.12.49.46
linux64-ix-slave04.build.scl1.mozilla.com has address 10.12.49.47
linux64-ix-slave05.build.scl1.mozilla.com has address 10.12.49.48
linux64-ix-slave06.build.scl1.mozilla.com has address 10.12.49.49
It seems that these have been decommissioned:
linux-ix-slave[07-31]
mv-moz2-linux-ix-slave[01-23]
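Those shorthand ranges can be expanded into individual hostnames (e.g. for bulk edits in slavealloc). A minimal Python sketch, with the bracket notation taken from the two lines above:

import re

def expand(pattern, domain):
    """Expand "name[07-31]" into name07.domain ... name31.domain."""
    match = re.match(r"(.+)\[(\d+)-(\d+)\]$", pattern)
    if not match:
        return ["%s.%s" % (pattern, domain)]
    base, lo, hi = match.groups()
    width = len(lo)  # preserve zero-padding from the range notation
    return ["%s%0*d.%s" % (base, width, n, domain)
            for n in range(int(lo), int(hi) + 1)]

decommissioned = (expand("linux-ix-slave[07-31]", "build.scl1")
                  + expand("mv-moz2-linux-ix-slave[01-23]", "build.mtv1"))
print("\n".join(decommissioned))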
According to the esr cycle [1], we would need to keep those machines around until Firefox 25 ships, which is on 2013-10-29 [1][2].
Another alternative: if we had *vmw-* VMs (like we used to have), we could repurpose those machines.
If we're fine with waiting, then we can wait.
On another note, why do we have esr17 nightly builds?
[1] http://mozorg.cdn.mozilla.net/media/img/firefox/organizations/release-overview.png
[2] https://wiki.mozilla.org/RapidRelease/Calendar
Comment 16•12 years ago
Yes, we need to keep those machines for ESR17. We do nightlies on ESR so our partners can validate the builds before we do the final release builds.
Assignee
Comment 17•11 years ago
Nothing more to do here.
Status: NEW → RESOLVED
Closed: 11 years ago
Resolution: --- → FIXED
Updated•11 years ago
Product: mozilla.org → Release Engineering
Updated•7 years ago
Product: Release Engineering → Infrastructure & Operations
Updated•5 years ago
Product: Infrastructure & Operations → Infrastructure & Operations Graveyard