What is this project?

This project sets up a complete local observability stack using the PLG pattern — Prometheus, Loki, and Grafana. It was built as part of a DevOps learning journey to understand the difference between metrics and logs, and how to correlate them in a unified dashboard. All five services run locally in Docker containers with a single command.

Quickstart

Get the full stack running in under 5 minutes

Architecture

Understand how all five services connect

PromQL queries

Query CPU and RAM metrics with Prometheus

LogQL queries

Filter and search system logs with Loki

The PLG stack

Service         Role                                       Host port
Prometheus      Scrapes and stores metrics                 9090
Node Exporter   Exposes system metrics (CPU, RAM, disk)    9100
Loki            Stores and indexes log streams             3100
Promtail        Collects logs and pushes to Loki           not mapped
Grafana         Unified dashboard for metrics and logs     3000
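
The wiring in the table can be sketched as a Docker Compose file. This is an illustration, not the project's actual file: the image tags, volume mounts, and config paths are assumptions, and a real setup needs the Prometheus and Promtail config files to exist at the mounted paths.

```yaml
# docker-compose.yml -- a minimal sketch of the five services.
# Image names are the official ones, but volumes and paths are placeholders.
services:
  prometheus:
    image: prom/prometheus
    ports: ["9090:9090"]
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
  node-exporter:
    image: prom/node-exporter
    ports: ["9100:9100"]
  loki:
    image: grafana/loki
    ports: ["3100:3100"]
  promtail:
    image: grafana/promtail
    volumes:
      - /var/log:/var/log:ro                        # host logs to tail
      - ./promtail-config.yml:/etc/promtail/config.yml
    # No host port: Promtail only pushes to Loki over the internal network.
  grafana:
    image: grafana/grafana
    ports: ["3000:3000"]
```

With a file like this in place, `docker compose up -d` is the single command that starts the whole stack.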

Key concepts

Metrics answer what is happening: numbers over time (CPU at 80%, 500 requests/sec). Prometheus collects these by scraping HTTP endpoints at a regular interval.

Logs answer why it is happening: timestamped text events from running processes. Loki stores these as indexed streams pushed by Promtail.

Grafana lets you view both side by side, so you can correlate a CPU spike on a graph with the error that caused it in the logs.
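
As a taste of the two query languages, here is one query of each kind. Both are sketches: the metric name comes from Node Exporter, but the `job` label on the log stream depends entirely on your Promtail configuration.

```
# PromQL: average CPU usage across all cores, as a percentage
100 - (avg(rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)

# LogQL: lines containing "error" from a log stream
# (the job label "varlogs" is a placeholder)
{job="varlogs"} |= "error"
```

In Grafana you would put each query on its own panel and line them up on the same time axis to do the correlation described above.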
Prometheus uses a pull model: it scrapes targets on a schedule (scrape_interval: 15s), and each target must expose a /metrics endpoint.

Loki uses a push model: Promtail watches log files and pushes new entries to Loki’s HTTP API.
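
The pull model above boils down to a few lines of Prometheus configuration. A minimal sketch, assuming the Node Exporter container is reachable by its Compose service name on a shared Docker network:

```yaml
# prometheus.yml -- minimal scrape configuration sketch
global:
  scrape_interval: 15s        # how often Prometheus pulls each /metrics endpoint

scrape_configs:
  - job_name: node
    static_configs:
      # "node-exporter" is an assumed Compose service name
      - targets: ['node-exporter:9100']
```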
Everything runs in Docker on your local machine. There are no external services, API keys, or cloud accounts required. This makes it ideal for learning and experimentation.

Prerequisites

  • WSL (Ubuntu) or any Linux environment
  • Docker and Docker Compose installed
New to the stack? Start with the Quickstart to get everything running, then explore the Architecture page to understand how the pieces fit together.