Closed Bug 1522675 Opened 6 years ago Closed 6 years ago

set up k8s clusters with "vpc-native" configuration

Categories

(Taskcluster :: Services, enhancement)


Tracking

(Not tracked)

RESOLVED FIXED

People

(Reporter: dustin, Assigned: dustin)

References

Details

This will be required in order to have access to postgres databases (see bug 1518251). It requires destroying and re-creating the cluster, unfortunately, so let's get it over with.

Depends on: 1503076

I've deployed this to both my own (poorly) and Brian's (better) deployments.

This is a really disruptive change, and doing this with a production instance would really be awful. So I'm glad we're getting it out of the way now.

Here's a draft of an email to send to the team when this lands:


In order to use Postgres, we need to change a setting on our Kubernetes clusters. GKE does not support changing that setting without deleting the cluster. Unfortunately, Terraform does not understand the situation well enough to handle this without some assistance. Buckle up!

When you go to deploy this change to your environment:

  1. Run "terraform init" to install the newest version of the plugins.
  2. Run "terraform apply". It will delete your old cluster and create a new one (taking about 5 minutes). It will finish as if everything is fine, but in fact nothing is running in the new cluster yet.
  3. Run "terraform state list | grep k8s_manifest | xargs terraform state rm". This finds every Kubernetes manifest in the state and tells Terraform that it's gone.
  4. Run "terraform apply" twice (until it gives you a value for cluster_ip).
  5. Wherever the DNS for your deployment is defined, update the IP.
  6. Wait for cert-manager to get a new certificate (or use Chrome, which will let you bypass the SSL warning until that's done). It doesn't take long.
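The command-line portion of the steps above can be sketched as a small script. This is a hypothetical helper, not part of the actual deployment repo: plan_redeploy just prints the commands in order, so you can review them before running anything (pipe the output to sh, with terraform on PATH, to execute them).

```shell
#!/bin/sh
# Hypothetical sketch of the redeploy sequence; prints the commands rather
# than running them, so it is safe to inspect first.
plan_redeploy() {
  echo "terraform init"
  echo "terraform apply"
  # Every k8s_manifest resource must be dropped from the state, since the
  # cluster it lived in has been deleted out from under Terraform.
  echo "terraform state list | grep k8s_manifest | xargs terraform state rm"
  echo "terraform apply"
  echo "terraform apply"  # repeat until cluster_ip appears in the outputs
}

plan_redeploy
```

The DNS update and the cert-manager wait (steps 5 and 6) are manual and deployment-specific, so they are deliberately left out of the sketch.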

Oof.

Status: NEW → RESOLVED
Closed: 6 years ago
Resolution: --- → FIXED
Component: Redeployability → Services