What is this project?
This project sets up a complete local observability stack using the PLG pattern — Prometheus, Loki, and Grafana. It was built as part of a DevOps learning journey to understand the difference between metrics and logs, and how to correlate them in a unified dashboard. All five services run locally in Docker containers with a single command.

- Quickstart: get the full stack running in under 5 minutes
- Architecture: understand how all five services connect
- PromQL queries: query CPU and RAM metrics with Prometheus
- LogQL queries: filter and search system logs with Loki
The PLG stack
| Service | Role | Host port |
|---|---|---|
| Prometheus | Scrapes and stores metrics | 9090 |
| Node Exporter | Exposes system metrics (CPU, RAM, disk) | 9100 |
| Loki | Stores and indexes log streams | 3100 |
| Promtail | Collects logs and pushes to Loki | not mapped |
| Grafana | Unified dashboard for metrics and logs | 3000 |
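The table above could map to a Compose file along these lines. This is a minimal sketch, not the project's actual file: service names and image tags are assumptions, and real setups would also mount config files and volumes.

```yaml
# docker-compose.yml sketch: service names and images are assumptions
services:
  prometheus:
    image: prom/prometheus
    ports: ["9090:9090"]
  node-exporter:
    image: prom/node-exporter
    ports: ["9100:9100"]
  loki:
    image: grafana/loki
    ports: ["3100:3100"]
  promtail:
    image: grafana/promtail   # no host port; pushes to Loki over the internal network
  grafana:
    image: grafana/grafana
    ports: ["3000:3000"]
```

With a file like this, the "single command" is `docker compose up -d`.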
Key concepts
Metrics vs. logs
Metrics answer what is happening — numbers over time (CPU at 80%, 500 requests/sec). Prometheus collects these by scraping HTTP endpoints at a regular interval.

Logs answer why it is happening — timestamped text events from running processes. Loki stores these as indexed streams pushed by Promtail.

Grafana lets you view both side by side, so you can correlate a CPU spike on a graph with the error that caused it in the logs.
Pull vs. push model
Prometheus uses a pull model — it scrapes targets on a schedule (`scrape_interval: 15s`). Each target must expose a /metrics endpoint.

Loki uses a push model — Promtail watches log files and pushes new entries to Loki's HTTP API.
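As a sketch, the pull side might be configured like this. The 15s interval comes from the text above; the target hostname assumes a Compose service named node-exporter:

```yaml
# prometheus.yml sketch: job name and target hostname are assumptions
global:
  scrape_interval: 15s
scrape_configs:
  - job_name: node
    static_configs:
      - targets: ["node-exporter:9100"]
```

Prometheus then fetches http://node-exporter:9100/metrics every 15 seconds.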
Local-only — no cloud required
Everything runs in Docker on your local machine. There are no external services, API keys, or cloud accounts required. This makes it ideal for learning and experimentation.
Prerequisites
- WSL (Ubuntu) or any Linux environment
- Docker and Docker Compose installed