How to Launch Zipkin and Sentry in a Local Kind Cluster Using Terraform and Helm


In modern software development, monitoring and observability are crucial for maintaining the health and performance of applications. Zipkin and Sentry are two powerful tools that can be used to track errors and distributed traces in your applications. In this article, we’ll guide you through the process of deploying Zipkin and Sentry on a local Kubernetes cluster managed by Kind, using Terraform and Helm. This setup provides a robust monitoring stack that you can run locally for development and testing.

Overview

This guide describes a Terraform project designed to deploy a monitoring stack with Sentry for error tracking and Zipkin for distributed tracing on a Kubernetes cluster managed by Kind. The project automates the setup of all necessary Kubernetes resources, including namespaces and Helm releases for both Sentry and Zipkin.

Tech Stack

  • Kind: A tool for running local Kubernetes clusters using Docker containers as nodes.
  • Terraform: Infrastructure as Code (IaC) tool used to manage the deployment.
  • Helm: A package manager for Kubernetes that simplifies the deployment of applications.

Prerequisites

Before you start, make sure you have the following installed and configured:

  • Kubernetes cluster: We’ll use Kind for this local setup.
  • Terraform: Installed on your local machine.
  • Helm: Installed for managing Kubernetes packages.
  • kubectl: Configured to communicate with your Kubernetes cluster.

Project Structure

Here are the key files in the project:

  • provider.tf: Sets up the Terraform provider configuration for Kubernetes and Helm; a minimal sketch follows this list.
  • sentry.tf: Defines the Terraform resources for deploying Sentry using Helm.
  • zipkin.tf: Defines the Kubernetes resources necessary for deploying Zipkin.
  • zipkin_ingress.tf: Sets up the Kubernetes Ingress resource for Zipkin to allow external access.
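
The repository's provider.tf is not reproduced in full here. The sketch below shows the minimum wiring needed, assuming the default Kind context name ("kind-kind") and kubeconfig path; adjust both to your setup. The kubernetes block inside the Helm provider matches the 2.x provider syntax used by the set blocks in sentry.tf.

Example: provider.tf (minimal sketch)
# Both providers connect to the Kind cluster through the local kubeconfig.
# config_path and config_context are assumptions; adjust them to match
# your environment.
provider "kubernetes" {
  config_path    = "~/.kube/config"
  config_context = "kind-kind"
}

provider "helm" {
  kubernetes {
    config_path    = "~/.kube/config"
    config_context = "kind-kind"
  }
}
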
Example: zipkin.tf
resource "kubernetes_namespace" "zipkin" {
  metadata {
    name = "zipkin"
  }
}

resource "kubernetes_deployment" "zipkin" {
  metadata {
    name      = "zipkin"
    namespace = kubernetes_namespace.zipkin.metadata[0].name
  }

  spec {
    replicas = 1

    selector {
      match_labels = {
        app = "zipkin"
      }
    }

    template {
      metadata {
        labels = {
          app = "zipkin"
        }
      }

      spec {
        container {
          name  = "zipkin"
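          # No tag on the image means the latest Zipkin release is pulled;
          # pin a specific tag for reproducible deployments.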
          image = "openzipkin/zipkin"

          port {
            container_port = 9411
          }
        }
      }
    }
  }
}

resource "kubernetes_service" "zipkin" {
  metadata {
    name      = "zipkin"
    namespace = kubernetes_namespace.zipkin.metadata[0].name
  }

  spec {
    selector = {
      app = "zipkin"
    }

    port {
      port        = 9411
      target_port = 9411
    }

    type = "NodePort"
  }
}
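
The project structure also lists zipkin_ingress.tf. A minimal sketch of it is shown below, assuming the NGINX Ingress class installed in Step 2 and the zipkin.local hostname mapped in /etc/hosts in the Configuration section; the repository's actual file may differ.

Example: zipkin_ingress.tf (minimal sketch)
resource "kubernetes_ingress_v1" "zipkin" {
  metadata {
    name      = "zipkin"
    namespace = kubernetes_namespace.zipkin.metadata[0].name
  }

  spec {
    # Matches the Ingress NGINX controller installed in Step 2.
    ingress_class_name = "nginx"

    rule {
      # Matches the /etc/hosts entry added in the Configuration section.
      host = "zipkin.local"

      http {
        path {
          path      = "/"
          path_type = "Prefix"

          backend {
            service {
              name = kubernetes_service.zipkin.metadata[0].name
              port {
                number = 9411
              }
            }
          }
        }
      }
    }
  }
}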
Example: sentry.tf
resource "kubernetes_namespace" "sentry" {
  metadata {
    name = var.sentry_app_name
  }
}

resource "helm_release" "sentry" {
  name       = var.sentry_app_name
  namespace  = var.sentry_app_name
  repository = "https://sentry-kubernetes.github.io/charts"
  chart      = "sentry"
  version    = "22.2.1"
  timeout    = 900

  set {
    name  = "ingress.enabled"
    value = var.sentry_ingress_enabled
  }

  set {
    name  = "ingress.hostname"
    value = var.sentry_ingress_hostname
  }

  set {
    name  = "postgresql.postgresqlPassword"
    value = var.sentry_postgresql_postgresqlPassword
  }

  set {
    name  = "kafka.podSecurityContext.enabled"
    value = "true"
  }

  set {
    name  = "kafka.podSecurityContext.seccompProfile.type"
    value = "Unconfined"
  }

  set {
    name  = "kafka.resources.requests.memory"
    value = var.kafka_resources_requests_memory
  }

  set {
    name  = "kafka.resources.limits.memory"
    value = var.kafka_resources_limits_memory
  }

  set {
    name  = "user.email"
    value = var.sentry_user_email
  }

  set {
    name  = "user.password"
    value = var.sentry_user_password
  }

  set {
    name  = "user.createAdmin"
    value = var.sentry_user_create_admin
  }

  depends_on = [kubernetes_namespace.sentry]
}

Configuration

Before deploying, adjust the values in terraform.tfvars to match your environment; these cover the Sentry and Zipkin settings (a sample file follows the host entries below). Additionally, add the following entries to your /etc/hosts file so the local hostnames resolve to your machine:

127.0.0.1       sentry.local
127.0.0.1       zipkin.local
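
Here is a terraform.tfvars sketch covering the variables referenced in sentry.tf. The values are illustrative placeholders, and the variables themselves must also be declared, e.g. in a variables.tf:

sentry_app_name                      = "sentry"
sentry_ingress_enabled               = true
sentry_ingress_hostname              = "sentry.local"
sentry_postgresql_postgresqlPassword = "change-me"
kafka_resources_requests_memory      = "1Gi"
kafka_resources_limits_memory        = "2Gi"
sentry_user_email                    = "admin@example.com"
sentry_user_password                 = "change-me"
sentry_user_create_admin             = true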

Step 1: Create a Kind Cluster

Clone the repository containing your Terraform and Helm configurations, and create a Kind cluster using the following command:

kind create cluster --config prerequisites/kind-config.yaml
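
The repository's prerequisites/kind-config.yaml is not shown here. For the Ingress controller in Step 2 to receive traffic on localhost, the Kind config typically labels the node and maps host ports 80 and 443, as in the upstream Kind ingress guide. The following is a sketch, not necessarily identical to the repository's file:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    kubeadmConfigPatches:
      - |
        kind: InitConfiguration
        nodeRegistration:
          kubeletExtraArgs:
            node-labels: "ingress-ready=true"
    extraPortMappings:
      - containerPort: 80
        hostPort: 80
        protocol: TCP
      - containerPort: 443
        hostPort: 443
        protocol: TCP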

Step 2: Set Up the Ingress NGINX Controller

Next, set up an Ingress NGINX controller, which will manage external access to the services within your cluster. Apply the Ingress controller manifest:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml

Wait for the Ingress controller to be ready to process requests:

kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=90s

Step 3: Initialize Terraform

Navigate to the project directory where your Terraform files are located and initialize Terraform:

terraform init

Step 4: Apply the Terraform Configuration

To deploy Sentry and Zipkin, apply the Terraform configuration:

terraform apply

This command provisions all of the resources defined in the project: the sentry and zipkin namespaces, the Helm release for Sentry, and the Deployment, Service, and Ingress for Zipkin. The Sentry chart installs many components, so the first apply can take a while (the release timeout is set to 900 seconds for this reason).

Step 5: Verify the Deployment

After the deployment is complete, you can verify the status of your resources by running:

kubectl get all -A

This command lists all resources across all namespaces, allowing you to check if everything is running as expected.
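
To focus on the two stacks specifically, you can also check each namespace and the Ingress resources:

kubectl get pods -n sentry
kubectl get pods -n zipkin
kubectl get ingress -A

The Sentry release starts a number of pods (web, workers, and its bundled data stores), so expect it to take several minutes for everything to become Ready.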

Step 6: Access Sentry and Zipkin

Once the deployment is complete, you can access the Sentry and Zipkin dashboards through the following URLs, which resolve via the /etc/hosts entries added earlier:

  • Sentry: http://sentry.local
  • Zipkin: http://zipkin.local

These URLs should open the respective web interfaces for Sentry and Zipkin, where you can start monitoring errors and tracing requests across your applications.
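
As a quick smoke test from the terminal, Zipkin exposes a /health endpoint. Assuming the Ingress routes the root path to the service, as in the zipkin_ingress.tf sketch above:

curl -s http://zipkin.local/health

A small JSON response with an "UP" status indicates the tracing backend is ready.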

Additional Tools

For a more comprehensive view of your Kubernetes resources, consider using the Kubernetes dashboard, which provides a user-friendly interface for managing and monitoring your cluster.

Cleanup

If you want to remove the deployed infrastructure, run the following command:

terraform destroy

This command will delete all resources created by Terraform. To remove the Kind cluster entirely, use:

kind delete cluster

This will clean up the cluster, leaving your environment as it was before the setup.

Conclusion

By following this guide, you’ve successfully deployed a powerful monitoring stack with Zipkin and Sentry on a local Kind cluster using Terraform and Helm. This setup is ideal for local development and testing, allowing you to monitor errors and trace requests across your applications with ease. With the flexibility of Terraform and Helm, you can easily adapt this configuration to suit other environments or expand it with additional monitoring tools.