1. Assume you’re running on your favourite cloud (Azure, AWS, or GCP) - you don’t have
to make this work specifically for GCP GKE.
2. Create a [login to view URL] that outlines your line of thinking for the solution.
3. Create plain Kubernetes resources (YAML or JSON). Please return this file in your
response, along with any other materials you want to share with us.
4. You can make the following assumptions:
a. Each system you’re deploying has its own isolated database. You don’t have to
worry about the type. You can assume the database is in the same region as
your cluster.
b. You can use any Docker image you’d like for your containers. It’s just an
example and does not have to work - say, any default PHP image you can
deploy on a pod. What the container runs does not matter, but we’ll be talking
about two different containers in the exercise: one for users, one for shifts.
c. Assume daily bell-curve scaling: high traffic during the day, low traffic at night.
1. We want to deploy two containers that scale independently from one another
○ Container 1: runs a small API that returns users from its database.
○ Container 2: runs a small API that returns shifts from its database.
2. For the best user experience, autoscale these services when average CPU utilization reaches 70%.
3. Ensure the deployments support rolling updates and rollbacks.
4. Your development team should not be able to run certain commands on your k8s cluster,
but you want them to be able to deploy and roll back. What types of IAM controls do you
put in place?
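As a sketch of requirements 1 and 3, here is one of the two Deployments. The image, names, and resource numbers are placeholders of my own choosing; the shifts-api Deployment is identical apart from its name, labels, and database configuration, which is what lets the two services scale independently.

```yaml
# users-api Deployment: one of the two independently scaled services.
# Image and sizing are placeholders per the brief; shifts-api mirrors
# this manifest with its own name, labels, and database settings.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: users-api
  labels:
    app: users-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: users-api
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # roll out one extra pod at a time
      maxUnavailable: 0    # never drop below the desired replica count
  template:
    metadata:
      labels:
        app: users-api
    spec:
      containers:
        - name: users-api
          image: php:8-apache   # placeholder image; does not have to work
          ports:
            - containerPort: 80
          resources:
            requests:
              cpu: 250m         # CPU requests are required for CPU-based autoscaling
              memory: 256Mi
            limits:
              cpu: 500m
              memory: 512Mi
          readinessProbe:       # gates traffic so rolling updates stay zero-downtime
            httpGet:
              path: /
              port: 80
```

Because this is a standard Deployment, rollbacks come for free: `kubectl rollout undo deployment/users-api` reverts to the previous ReplicaSet, and `kubectl rollout status` can gate a CI pipeline on the update succeeding.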
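For requirement 2, a HorizontalPodAutoscaler per Deployment targets 70% average CPU. The replica bounds below are assumptions sized for the daily bell curve; a second, identical HPA targets shifts-api so the two services scale independently.

```yaml
# HPA: adds pods when average CPU utilization across users-api pods
# exceeds 70% of their CPU requests, and scales back down overnight.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: users-api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: users-api
  minReplicas: 2      # floor for the overnight trough
  maxReplicas: 10     # ceiling for the daytime peak
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```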
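For requirement 4, one workable approach is to map the dev team's cloud IAM group to a namespaced Kubernetes Role that grants only the verbs needed to deploy and roll back. The group and namespace names below are assumptions; note there is deliberately no `pods/exec`, no `delete`, and no cluster-scoped access.

```yaml
# Role limiting the dev team to deploy/rollback actions: they can update
# Deployments and read rollout history, but cannot exec into or delete pods.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: production
  name: deployer
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch", "update", "patch"]
  - apiGroups: ["apps"]
    resources: ["replicasets"]
    verbs: ["get", "list", "watch"]   # needed for rollout history / undo
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]   # read-only; no pods/exec subresource
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: production
  name: dev-team-deployer
subjects:
  - kind: Group
    name: dev-team     # mapped from cloud IAM (e.g. a Google Group on GKE)
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: deployer
  apiGroup: rbac.authorization.k8s.io
```

On the cloud side, the same idea applies one layer up: developers get an IAM role that allows cluster authentication but not cluster administration, so all fine-grained permissions are enforced by this RBAC Role.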
● How would you apply the configs to multiple environments (staging vs production)?
● How would you auto scale the deployment based on network latency instead of CPU?
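On the multi-environment question, one common option (among others, such as Helm values files) is Kustomize overlays: a shared `base/` directory holds the manifests, and each environment patches only what differs. The directory layout and replica counts here are illustrative assumptions.

```yaml
# overlays/production/kustomization.yaml
# base/ holds the shared Deployment/HPA/RBAC manifests; this overlay
# pins the namespace and patches production-only values. A staging
# overlay does the same with smaller numbers.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: production
resources:
  - ../../base
patches:
  - patch: |-
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: users-api
      spec:
        replicas: 4     # staging overlay keeps this at 1
```

Applying an environment is then `kubectl apply -k overlays/production` (or `overlays/staging`), so the same base can never drift between environments.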
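On the latency question: CPU is only a proxy for user experience, and the HPA cannot see latency natively. A common pattern is to export a latency metric from the pods to Prometheus and surface it through the custom metrics API with prometheus-adapter, then target it with a `Pods` metric. The metric name `http_request_latency_p95` and the threshold below are hypothetical, a sketch assuming such an adapter is installed.

```yaml
# Scale on a per-pod latency metric instead of CPU.
# Assumes prometheus-adapter (or similar) publishes the hypothetical
# metric http_request_latency_p95, in milliseconds, for each pod.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: users-api-latency
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: users-api
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Pods
      pods:
        metric:
          name: http_request_latency_p95
        target:
          type: AverageValue
          averageValue: "200"   # add pods when average p95 exceeds ~200ms
```

One caveat worth raising in the write-up: latency is a trailing, noisy signal (it can spike for reasons scaling won't fix, like a slow database), so in practice it pairs well with request-rate or CPU metrics rather than replacing them outright.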