Closed Bug 1331397 Opened 8 years ago Closed 8 years ago

Reset prototype instance to production database

Categories

(Tree Management :: Treeherder: Infrastructure, defect)

Type: defect
Priority: Not set
Severity: normal

Tracking

(Not tracked)

RESOLVED FIXED

People

(Reporter: jgraham, Assigned: fubar)

References

(Blocks 1 open bug)

Details

wlach landed a large number of schema changes to stage/prod recently. To simulate deployment as closely as possible, I'd like to reset prototype to current prod, so that I can avoid re-running the upstream schema changes and can test that my own migrations apply cleanly.
Flags: needinfo?(klibby)
I've set SKIP_PREDEPLOY=1 on the treeherder-prototype Heroku app to prevent the custom branch's migrations from being run against the new DB (they need to be rebased against master first), and have disabled auto-deploy and manually deployed master. Once the new RDS instance is ready (and the credentials have been changed from what was in the prod snapshot), we can update DATABASE_URL and remove SKIP_PREDEPLOY.
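For anyone repeating this, the Heroku side is roughly the following (the DATABASE_URL value is a placeholder; toggling auto-deploy and manually deploying master were done from the Heroku dashboard):

  # Stop the pre-deploy (migration) step from running on subsequent deploys
  heroku config:set SKIP_PREDEPLOY=1 --app treeherder-prototype
  # Once the new RDS instance is ready, point the app at it and re-enable migrations
  heroku config:set DATABASE_URL='mysql://<user>:<password>@<new-rds-host>/<db>' --app treeherder-prototype
  heroku config:unset SKIP_PREDEPLOY --app treeherder-prototype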
https://github.com/mozilla-platform-ops/devservices-aws/pull/32 resets it to the latest prod snapshot, and it has been deployed.
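For context, a rough sketch of driving that reset from the devservices-aws checkout (the prod instance identifier and the terraform resource name below are illustrative, not taken from the PR):

  # Find the most recent prod snapshot to restore from
  aws rds describe-db-snapshots --db-instance-identifier treeherder-prod \
    --query 'reverse(sort_by(DBSnapshots, &SnapshotCreateTime))[0].DBSnapshotIdentifier'
  # Preview and apply just the dev DB instance change
  terraform plan -target=aws_db_instance.treeherder_dev
  terraform apply -target=aws_db_instance.treeherder_dev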
Flags: needinfo?(klibby)
I'm unable to connect to the reset treeherder-dev instance using any of these passwords: {previous dev, current stage, current prod, former treeherder-heroku}. Was the password changed after the snapshot restore? If so, to what?
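For the record, the check was just attempting a trivial query with each candidate password (the hostname and user below are placeholders for the values from the respective DATABASE_URL vars):

  mysql --host=<dev-rds-endpoint> --user=<admin-user> --password='<candidate-password>' -e 'SELECT 1;'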
Flags: needinfo?(klibby)
I believe the reset did not finish successfully. The deploy was also updating the stage storage, which timed out; while the dev reset looked like it finished, some steps may not have completed because the other update failed to finish in time. I'd have to dig through the terraform internals to know for certain. In any case, terraform plan says it's not done, so another deploy is in progress.
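For anyone following along, the quick way to confirm whether the reset has converged is a plan run from the devservices-aws checkout:

  # A "No changes" result means the reset completed; any remaining diff needs another apply
  terraform plan
  terraform apply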
Flags: needinfo?(klibby)
Dev recreated from the treeherder-prod-2017-01-24-07-06 snapshot. Ed, can you check the credentials? We ran into https://github.com/hashicorp/terraform/issues/5417 and dev was completely nuked, so I suspect it has prod's creds.
Flags: needinfo?(emorley)
The reset dev instance had prod's admin user credentials, so I reset them back to the former dev ones using:

  SET PASSWORD = PASSWORD('...');

(For anyone else doing this in the future, look the credentials up via the existing Heroku DATABASE_URL vars; see the command sketch below.)

I have also:
* deployed latest master to the Heroku treeherder-prototype app
* enabled auto-deploy from master
* removed the `SKIP_PREDEPLOY=1` env var, so the new migrations on master run
* cleared the memcachier cache

However, there are both cloudamqp and memcachier errors now, which I'll break out into a new bug.
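Roughly, the commands involved were along these lines (the host and user are placeholders; the real values come from the existing DATABASE_URL config vars):

  # Look up the former dev credentials from the app config
  heroku config:get DATABASE_URL --app treeherder-prototype
  # Connect as the admin user (still using prod's password) and reset it to the former dev one
  mysql --host=<dev-rds-endpoint> --user=<admin-user> -p \
    -e "SET PASSWORD = PASSWORD('<former-dev-password>');"
  # Allow the pre-deploy migrations to run again
  heroku config:unset SKIP_PREDEPLOY --app treeherder-prototype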
Flags: needinfo?(emorley)
Depends on: 1334087
The issues in bug 1334087 are now resolved, and the treeherder-prototype app is ingesting data successfully. James, I'll leave it to you to reset the Elasticsearch content as appropriate, so that it matches the new DB content.
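For the Elasticsearch side, one approach (a sketch only; the index name is illustrative and the reindex step depends on Treeherder's own tooling) is to drop the stale index and rebuild it from the new DB contents:

  # Delete the index that still reflects the old DB
  curl -X DELETE 'http://<es-host>:9200/<treeherder-index>'
  # ...then re-run the app's bulk indexing against the reset database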
Assignee: nobody → klibby
Status: NEW → RESOLVED
Closed: 8 years ago
Resolution: --- → FIXED
Blocks: 1268484