Kubernetes brings many advantages like scaling, load balancing, and high availability for service deployments. In this post, we’re going to deploy a Go web service to Kubernetes.
Prerequisites

To follow along, you should have Go, Docker, minikube, and kubectl installed.
The Go web service
We’re starting with a simple web service that returns Hello World to an HTTP GET. We’ll use the net/http package that is part of the Go standard library.
mkdir -p deploy-go-to-k8s/src
cd deploy-go-to-k8s
go mod init webservice
Open the folder in your favourite Go editor and create the file src/main.go
main.go

package main

import (
    "fmt"
    "net/http"
)

func main() {
    http.HandleFunc("/", httpHandleFunc)
    http.ListenAndServe(":8080", nil)
}

func httpHandleFunc(w http.ResponseWriter, r *http.Request) {
    fmt.Println("was called")
    w.Write([]byte("<h1>Hello World</h1>"))
}
Run and test the webservice in a shell:
go run src/main.go

# in another window
curl localhost:8080
# Output: <h1>Hello World</h1>
Build a Docker image for the web service
To build a Docker image we need a Dockerfile. We’re using the multi-stage build feature of Docker here to:
- Build an executable from the Go source code (buildstage)
- Build an image to run the executable webservice
Dockerfile
# Stage 1: build the executable from source
FROM golang:alpine AS buildstage
COPY ./src /src
WORKDIR /src
# CGO_ENABLED=0 produces a statically linked binary that can run in the empty scratch image
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o ./bin/webservice main.go

# Stage 2: produce a minimal runnable Docker image
FROM scratch
COPY --from=buildstage /src/bin/webservice /webservice
ENTRYPOINT [ "/webservice" ]
Now we have to build the image. We're using minikube's Docker daemon because the image needs to be available to minikube in the next steps.
# set the docker daemon (bash shell)
eval $(minikube docker-env)
# for powershell: & minikube docker-env | Invoke-Expression

# the tag 1.0.0 is important because minikube tries to pull the image from Docker Hub for :latest tags
docker build -t go-webservice:1.0.0 .
Test that the image works as expected:
# Run the container
docker run -ti -p 8080:8080 go-webservice:1.0.0

# in another window
curl localhost:8080
# Output: <h1>Hello World</h1>
Deploy the web service to Kubernetes
Now that we've built the Docker image, we can use it to deploy the web service to Kubernetes. The basic unit of execution in Kubernetes is a Pod, which can run one or more containers. You can find more info on the concept of Kubernetes Pods in the Kubernetes docs. To describe our web service as a Pod, we use the following resource definition:
pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: go-webservice-pod
  labels:
    app: go-webservice
spec:
  containers:
  - name: go-webservice
    image: go-webservice:1.0.0
Run the Pod:
kubectl apply -f pod.yml

# port-forward makes the pod port 8080 available at localhost:8080
kubectl port-forward go-webservice-pod 8080:8080

# In another terminal run
curl localhost:8080
# Output: <h1>Hello World</h1>

# Cleanup
kubectl delete pod go-webservice-pod
The Pod is up and running, and the web service returns the expected result. But running standalone Pods is not the best way to deploy our web service. If the Pod crashes for any reason or gets deleted, it won't come back up. Furthermore, if the load on our web service increases, we are not able to scale it up.
Kubernetes has more resilient resources for running Pods. One of these resource kinds is the Deployment. It's the standard resource for stateless workloads like our Go web service, and it enables automatic restarts on failure, scalability, and high availability.
A Deployment consists of metadata for the resource itself and a template for the Pods it should run. We can specify the number of Pods to run (replicas) and how the Deployment identifies these Pods. Here's the resource definition:
deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-webservice-deployment
  labels:
    app: go-webservice
spec:
  # Number of Pods to run for this Deployment
  replicas: 1
  selector:
    # How Pods can be identified
    matchLabels:
      app: go-webservice
  template:
    # Pod Template Definition
    metadata:
      labels:
        # The label queried by the selector above
        app: go-webservice
    spec:
      containers:
      - name: go-webservice
        image: go-webservice:1.0.0
Test the web service deployment:
kubectl apply -f deployment.yml
kubectl get pods
# returns a pod called go-webservice-deployment-<random_id>

# If you delete the pod, Kubernetes will schedule another instance of the go-webservice pod
kubectl delete pod go-webservice-deployment-<random_id>
kubectl get pods
# returns a pod called go-webservice-deployment-<other_id>

# Change the replica count in the deployment.yml file to 3
kubectl apply -f deployment.yml
kubectl get pods
# there are now 3 instances of the pod running
The Deployment gives us the ability to kill running Pods and have them rescheduled immediately. Additionally, we can scale our Go web service horizontally. But a new problem arises when running multiple instances of a web service: we need a way to split incoming traffic across the instances. For this, Kubernetes has the Service resource:
service.yml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  # Notice that the selector definition is the same as the labels definition for the pods
  selector:
    app: go-webservice
  type: NodePort
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
    # use a NodePort service here to make the service available outside the cluster
    nodePort: 30000
The Service identifies the Pods it should route traffic to by the label we gave them in the deployment.yml descriptor. We use a Service of type NodePort to make the service reachable from outside the local minikube cluster. With the NodePort Service, we can verify that traffic is routed to all running go-webservice Pods by calling the minikube IP on the declared port. More info on Kubernetes Service types can be found in the Kubernetes docs.
Test the Service:
# Get the IP of your minikube node
minikube ip

# Call the service 10 times
for i in `seq 10`; do curl <minikube_ip>:30000; done

# We can verify that all instances have been called by printing the logs of each pod
kubectl logs go-webservice-deployment-<deployment_id>-<pod_id>
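As an optional sanity check, you can also inspect the Service's endpoints to confirm that its label selector actually picked up the Pods of the Deployment. This assumes the deployment.yml and service.yml from above have been applied:

# List the Pod IPs the Service currently routes traffic to
kubectl get endpoints my-service
# With replicas: 3 you should see three <pod_ip>:8080 entries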
In a production environment, you would probably use a Service attached to an external load balancer or a Service of type ClusterIP combined with an Ingress controller. You can read about the different Service types and Ingress controllers in the Kubernetes docs or one of the upcoming blog posts.
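As a rough sketch of what the ClusterIP-plus-Ingress variant could look like: the resource names go-webservice-internal and go-webservice-ingress as well as the host go-webservice.example.com are just placeholders, and the example assumes an Ingress controller (for instance the minikube NGINX ingress addon, enabled via minikube addons enable ingress) is running in the cluster.

# ClusterIP Service: only reachable inside the cluster, used as the Ingress backend
apiVersion: v1
kind: Service
metadata:
  name: go-webservice-internal
spec:
  selector:
    app: go-webservice
  type: ClusterIP
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
---
# Ingress: routes HTTP traffic for go-webservice.example.com to the Service above
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: go-webservice-ingress
spec:
  rules:
  - host: go-webservice.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: go-webservice-internal
            port:
              number: 8080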