set up k8s clusters with "vpc-native" configuration
Categories: Taskcluster :: Services, enhancement
Tracking: Not tracked
People: Reporter: dustin; Assigned: dustin
This will be required in order to have access to postgres databases (see bug 1518251). It requires destroying and re-creating the cluster, unfortunately, so let's get it over with.
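For context, "vpc-native" on GKE means the cluster uses alias IP ranges, which is what private-IP access to managed postgres requires. A hedged Terraform sketch of what the setting looks like (the resource name, region, and CIDR values here are hypothetical, and this reflects the google provider of that era, where the mere presence of the `ip_allocation_policy` block selected VPC-native mode):

```
# Illustrative only: presence of ip_allocation_policy makes the cluster
# VPC-native (alias IPs). Adding or removing it on an existing cluster
# forces Terraform to destroy and re-create the cluster.
resource "google_container_cluster" "cluster" {
  name     = "taskcluster"   # hypothetical name
  location = "us-east1"      # hypothetical region

  ip_allocation_policy {
    # Netmask sizes let GKE choose the secondary ranges itself.
    cluster_ipv4_cidr_block  = "/16"
    services_ipv4_cidr_block = "/22"
  }
}
```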
Comment 1•6 years ago
Comment 2•6 years ago
I've deployed this to both my own (poorly) and brian's (better) deployments.
This is a really disruptive change, and doing it against a production instance would be awful. So I'm glad we're getting it out of the way now.
Here's a draft of an email to send to the team when this lands:
In order to use Postgres, we need to change a setting on our Kubernetes clusters. GKE does not support changing that setting without deleting the cluster. Unfortunately, Terraform does not understand the situation well enough to handle this without some assistance. Buckle up!
When you go to deploy this change to your environment:
- run `terraform init` to install the newest version of the plugins
- run `terraform apply`. It will delete your old cluster and create a new one (and takes ~5 minutes to do so). It will finish as if everything's fine, but actually there's nothing running in your new cluster.
- run `terraform state list | grep k8s_manifest | xargs terraform state rm`. This looks for every Kubernetes manifest and tells Terraform that it's gone.
- run `terraform apply` twice (until it gives you a value for cluster_ip)
- wherever the DNS for your deployment is defined, update the IP
- wait for cert-manager to get a new certificate (or use Chrome, which will let you bypass the SSL warning until that's done). It doesn't take long.
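The steps above can be sketched as a small script. This is not an official migration script, just an illustration of the sequence; it defaults to a dry run that only prints the commands (set `DRY_RUN=` to actually execute them, with terraform on PATH):

```shell
#!/bin/sh
# Sketch of the cluster re-creation steps; dry-run by default.
set -e
DRY_RUN=${DRY_RUN-1}

run() {
  if [ -n "$DRY_RUN" ]; then echo "+ $*"; else "$@"; fi
}

run terraform init     # install the newest plugin versions
run terraform apply    # destroys the old cluster, creates the new (empty) one

# Tell terraform the k8s manifests died along with the old cluster.
if [ -n "$DRY_RUN" ]; then
  echo '+ terraform state list | grep k8s_manifest | xargs terraform state rm'
else
  terraform state list | grep k8s_manifest | xargs terraform state rm
fi

run terraform apply    # re-create the manifests...
run terraform apply    # ...again, until cluster_ip is output

echo "now: update DNS to the new cluster_ip, then wait for cert-manager"
```

The dry-run default is deliberate: the real sequence is destructive, so printing the plan first mirrors the "buckle up" caution in the email.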
Comment 3•6 years ago
Oof.