Deploying Microservices on Kubernetes: A Step-by-Step Tutorial for Enterprise Software Architecture
Learn to deploy microservices on Kubernetes with this hands-on tutorial covering deployments, services, NodePort, and YAML configuration. Perfect for CS 548 students and enterprise architects.
Introduction: Why Kubernetes Matters in 2026
In the rapidly evolving landscape of enterprise software architecture, Kubernetes has become the de facto standard for container orchestration. As of May 2026, with AI-driven applications and real-time data processing at an all-time high, understanding how to deploy microservices on Kubernetes is essential for software engineers. This tutorial guides you through deploying a database server and a microservice into a local Kubernetes cluster, using concepts from a typical enterprise architecture assignment. Whether you are a student in CS 548 or a professional looking to sharpen your DevOps skills, this step-by-step guide will help you master Kubernetes deployments, services, and networking.
Prerequisites: Setting Up Your Environment
Before diving into deployment, ensure you have Docker Desktop installed with Kubernetes enabled. On macOS or Windows, navigate to Docker Desktop Settings and enable Kubernetes. The kubectl command-line tool is included with Docker Desktop. Verify your context is set to docker-desktop using:
$ kubectl config get-contexts
$ kubectl config use-context docker-desktop

This tutorial assumes you have built Docker images for your microservices (e.g., cs548/clinic-database and cs548/clinic-domain:1.0.0) and that they are available locally.
Step 1: Deploying the Database Server as a Pod
First, create a YAML file named clinic-database-deploy.yaml for the database deployment. This deployment ensures one replica of the database pod runs with environment variables for PostgreSQL credentials.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: clinic-database
  labels:
    app: clinic-database
spec:
  replicas: 1
  selector:
    matchLabels:
      app: clinic-database
  template:
    metadata:
      labels:
        app: clinic-database
    spec:
      restartPolicy: Always
      containers:
      - name: clinic-database
        image: cs548/clinic-database
        env:
        - name: POSTGRES_PASSWORD
          value: XXXXXX
        - name: DATABASE_PASSWORD
          value: YYYYYY
        imagePullPolicy: Never

Apply the deployment and check the pod status:
$ kubectl apply -f clinic-database-deploy.yaml
$ kubectl get pods
$ kubectl describe pod <pod-name>

The imagePullPolicy: Never setting tells Kubernetes to use the local image, which is perfect for local development. This is analogous to using a local database during the early development phase of a gaming app, where you want fast iteration without network latency.
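One optional hardening step: a readiness probe keeps the service from routing traffic to the pod until PostgreSQL actually accepts connections. The following is a sketch that assumes the cs548/clinic-database image is PostgreSQL-based and ships the standard pg_isready tool; it would be indented under the container entry in clinic-database-deploy.yaml:

```yaml
# Fragment for the clinic-database container spec.
# Assumes a PostgreSQL-based image that includes pg_isready.
readinessProbe:
  exec:
    command: ["pg_isready", "-U", "postgres"]
  initialDelaySeconds: 5   # give PostgreSQL time to initialize
  periodSeconds: 10        # re-check every 10 seconds
```

With this in place, kubectl get pods shows the pod as READY 0/1 until the database is actually accepting connections.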
Step 2: Exposing the Database with a Service
To allow other pods and your local machine to connect to the database, create a service file clinic-database-service.yaml. Using NodePort type exposes the database outside the cluster on a high port.
apiVersion: v1
kind: Service
metadata:
  name: clinic-database
  labels:
    app: clinic-database
spec:
  type: NodePort
  ports:
  - name: jdbc
    port: 5432
    targetPort: 5432
  selector:
    app: clinic-database

Apply the service and retrieve the node port:
$ kubectl apply -f clinic-database-service.yaml
$ kubectl get service clinic-database
$ kubectl describe service clinic-database

You will see output similar to:
Name: clinic-database
Type: NodePort
IP: 10.107.176.144
Port: jdbc 5432/TCP
TargetPort: 5432/TCP
NodePort: jdbc 31338/TCP
Endpoints: 10.1.0.8:5432

The NodePort (e.g., 31338) is the port on your localhost that forwards to the database container. You can now connect from IntelliJ IDEA or psql using localhost:31338. This is similar to how a mobile app backend exposes a port for database connections during development.
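Note that Kubernetes assigns the NodePort randomly from the 30000-32767 range by default, so the value changes if you delete and recreate the service. If you want a stable port for your IDE connection, you can pin it explicitly; this sketch uses an arbitrarily chosen value:

```yaml
# In clinic-database-service.yaml: pin the node port.
# The value must fall in the cluster's NodePort range (30000-32767 by default).
spec:
  type: NodePort
  ports:
  - name: jdbc
    port: 5432
    targetPort: 5432
    nodePort: 30432   # example value; any free port in the allowed range works
```

After re-applying the service, the database is always reachable at localhost:30432.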
Step 3: Deploying the Domain Microservice
Next, deploy the microservice that handles business logic. Create clinic-domain-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: clinic-domain
  labels:
    app: clinic-domain
spec:
  replicas: 1
  selector:
    matchLabels:
      app: clinic-domain
  template:
    metadata:
      labels:
        app: clinic-domain
    spec:
      restartPolicy: Always
      containers:
      - name: clinic-domain
        image: cs548/clinic-domain:1.0.0
        env:
        - name: QUARKUS_DATASOURCE_USERNAME
          value: clinicuser
        - name: QUARKUS_DATASOURCE_PASSWORD
          value: YYYYYY
        imagePullPolicy: Never

Apply and verify:
$ kubectl apply -f clinic-domain-deployment.yaml
$ kubectl get pods
$ kubectl describe pod <pod-name>
$ kubectl logs <pod-name>

Notice the environment variables match those from your previous assignment. This microservice will connect to the database using the service name clinic-database as the hostname, thanks to Kubernetes DNS.
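Because the service name resolves via cluster DNS, the datasource URL can point at clinic-database directly. As a sketch, assuming a Quarkus application and a database named clinicdb (the actual database name depends on how your image initializes PostgreSQL), the JDBC URL could be supplied as one more environment variable:

```yaml
# Additional entry for the env list in clinic-domain-deployment.yaml.
# "clinicdb" is an assumed database name; substitute the one your image creates.
- name: QUARKUS_DATASOURCE_JDBC_URL
  value: jdbc:postgresql://clinic-database:5432/clinicdb
```

Here clinic-database resolves to the ClusterIP of the database service, so the microservice never needs to know the pod's IP address.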
Step 4: Exposing the Microservice
Create a service for the domain microservice: clinic-domain-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: clinic-domain
  labels:
    app: clinic-domain
spec:
  type: NodePort
  ports:
  - name: http
    port: 8080
  selector:
    app: clinic-domain

Apply and get the node port:
$ kubectl apply -f clinic-domain-service.yaml
$ kubectl get service clinic-domain
$ kubectl describe service clinic-domain

You can now test the microservice's REST endpoints using the assigned node port, e.g., http://localhost:31435/api/. This setup mirrors how a real-world enterprise application would expose its API gateway.
Understanding Kubernetes Concepts
Deployments vs. Pods
A Deployment manages a set of identical pods, ensuring the desired number of replicas are running. In this tutorial, we used replicas: 1 for simplicity, but in production, you might scale to multiple replicas for high availability. This is analogous to how a popular AI chatbot service scales its backend instances during peak usage.
Services and Networking
A Service provides a stable endpoint to access a set of pods. NodePort is one type that exposes the service on a static port on each node's IP. Other types include ClusterIP (internal only) and LoadBalancer (for cloud deployments). In cloud environments like AWS EKS, you would use a LoadBalancer to distribute traffic.
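For illustration, the same domain service could be exposed in a cloud cluster by changing only the service type. This is a sketch, not something to apply to the local setup in this tutorial; on a platform like AWS EKS it would provision an external load balancer automatically:

```yaml
# Cloud variant of clinic-domain-service.yaml: LoadBalancer instead of NodePort.
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80          # port exposed by the external load balancer
    targetPort: 8080  # port the container listens on
  selector:
    app: clinic-domain
```

The selector and target port stay the same; only the way traffic enters the cluster changes.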
Environment Variables and Configuration
Passing database credentials via environment variables is a common pattern. For production, consider using Kubernetes Secrets to manage sensitive data securely. This is especially important in fintech or healthcare applications where data privacy is critical.
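As a sketch of that pattern, the password could live in a Secret (the name clinic-credentials here is illustrative) and be referenced from the deployment instead of appearing as a literal value:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: clinic-credentials
type: Opaque
stringData:
  database-password: YYYYYY   # placeholder, as in the tutorial

# In the container spec, replace the literal env value with a reference:
#   env:
#   - name: QUARKUS_DATASOURCE_PASSWORD
#     valueFrom:
#       secretKeyRef:
#         name: clinic-credentials
#         key: database-password
```

The credential then lives in one place, can be rotated without editing deployments, and never appears in the deployment YAML you check into version control.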
Best Practices and Troubleshooting
- Use imagePullPolicy: IfNotPresent for local development to avoid pulling from remote registries unnecessarily.
- Check pod logs with kubectl logs <pod-name> to debug startup issues.
- Verify service connectivity by running a temporary pod with kubectl run -it --rm busybox --image=busybox -- sh and using wget.
- Monitor resource usage with kubectl top pods (requires the metrics server) to ensure your containers have enough CPU and memory.
Conclusion
You have successfully deployed a database server and a microservice on Kubernetes using YAML configurations. This workflow is fundamental to modern enterprise software architecture, enabling scalable, resilient applications. As you progress, explore Kubernetes features like ConfigMaps, Secrets, Ingress controllers, and Helm charts. Whether you are building the next big gaming platform, a fintech app, or an AI-driven analytics tool, Kubernetes gives you the power to orchestrate containers with confidence.
Happy deploying!