A container won't start
Check the logs of the failing container first.
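The log command itself was elided here; the standard Docker Compose form (matching the `docker compose logs <service>` usage later in this guide) is:

```shell
docker compose logs <service>
```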
Replace `<service>` with the container name shown as `Exited` in `docker compose ps` — for example, `prometheus`, `loki`, `promtail`, `node-exporter`, or `grafana`.

Common causes:
- Port already in use. If the log contains `bind: address already in use`, another process on your host is occupying the same port. See the Port already in use accordion below for steps to resolve this.
- Config file syntax error. Prometheus, Loki, and Promtail all mount their config files as volumes. A YAML syntax error in any of these files prevents the container from initialising. Check `prometheus/prometheus.yml`, `loki/loki-config.yml`, or `promtail/promtail-config.yml` for incorrect indentation or invalid values, then recreate the affected service.
- Volume path issue. If Docker cannot find a file at the path specified in the `volumes:` section of `docker-compose.yml`, the container exits immediately. Confirm that `prometheus/prometheus.yml`, `loki/loki-config.yml`, and `promtail/promtail-config.yml` all exist relative to the directory where you run `docker compose`.
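The restart command for the config-syntax fix above was elided; based on the force-recreate pattern used elsewhere in this guide (an assumption, not necessarily the original command), a likely form is:

```shell
# Recreate the affected service so it re-reads the fixed config file
docker compose up -d --force-recreate <service>
```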
Grafana can't connect to Prometheus
Symptom: The Prometheus datasource in Grafana shows a connection error when you click Save & test.

Diagnosis:
- Confirm Prometheus is running with `docker compose ps`; the `prometheus` container should show status `Up`.
- Check Prometheus logs for startup errors with `docker compose logs prometheus`.
Fix: Set the datasource URL to `http://prometheus:9090`, not `http://localhost:9090`.

Grafana runs inside Docker and resolves service names over the internal Docker network. Using `localhost` inside the container refers to the Grafana container itself, not your host machine, so the connection will always fail. The correct URL uses the Docker service name `prometheus` as the hostname.

Go to Connections → Data sources, select your Prometheus datasource, update the URL to `http://prometheus:9090`, and click Save & test.
Grafana can't connect to Loki
Symptom: The Loki datasource in Grafana shows a connection error, or log queries return no data.

Diagnosis:
- Confirm Loki is running with `docker compose ps`; the `loki` container should show status `Up`.
- Check Loki logs for errors with `docker compose logs loki`.
- Confirm Loki is listening on port `3100` by checking its config.
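The config excerpt this step referred to was elided; in Loki's standard configuration schema the listen port is set under the `server` block, so a matching `loki/loki-config.yml` fragment would be:

```yaml
# loki/loki-config.yml — HTTP listen port (field name from Loki's config schema)
server:
  http_listen_port: 3100
```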
Fix: Set the datasource URL to `http://loki:3100`, not `http://localhost:3100`. Like Prometheus, Grafana resolves `loki` as a hostname over the internal Docker network.

Go to Connections → Data sources, select your Loki datasource, update the URL to `http://loki:3100`, and click Save & test.
No logs in Loki / Promtail not collecting
Symptom: The Loki datasource connects successfully in Grafana, but log queries such as `{job="varlogs"}` return no results.

Diagnosis:
- Check Promtail logs for collection errors with `docker compose logs promtail`. Look for messages about file discovery or push failures.
- Confirm the `/var/log` volume is mounted correctly in `docker-compose.yml`. If the `/var/log:/var/log:ro` line is missing or the path is wrong, Promtail has nothing to read.
- Verify the scrape path in `promtail/promtail-config.yml`. The `__path__` label must match files that actually exist inside the container at `/var/log/*.log`. If your host system stores logs elsewhere, update the volume mount and `__path__` to match.
- Confirm Promtail can reach Loki at the push URL configured in `promtail/promtail-config.yml`. If Loki is not running or the URL is wrong, Promtail will log push errors.
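The config excerpts referenced in the steps above (the `__path__` label and the push URL) were elided; a typical `promtail/promtail-config.yml` consistent with this guide might look like the sketch below. Field names follow Promtail's standard schema, and the exact values are assumptions:

```yaml
# promtail/promtail-config.yml (sketch; assumes the /var/log:/var/log:ro
# volume mount from docker-compose.yml is in place)
clients:
  - url: http://loki:3100/loki/api/v1/push   # where Promtail pushes logs

scrape_configs:
  - job_name: varlogs
    static_configs:
      - targets: [localhost]
        labels:
          job: varlogs
          __path__: /var/log/*.log   # must match files visible inside the container
```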
Prometheus not scraping Node Exporter
Symptom: Metrics like `node_cpu_seconds_total` are missing from Prometheus, or the `node-exporter` target shows as DOWN on the Prometheus targets page.

Diagnosis:
- Open the Prometheus targets page at http://localhost:9090/targets and check the status of the `node-exporter` job.
- Check Node Exporter logs with `docker compose logs node-exporter`.
- Confirm the scrape target in `prometheus/prometheus.yml` is set to `node-exporter:9100`. Prometheus resolves `node-exporter` as a hostname over the internal Docker network. If this is set to `localhost:9100`, scraping will fail because `localhost` inside the Prometheus container refers to the Prometheus container itself.
- Confirm Node Exporter is running with `docker compose ps`.
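The `prometheus.yml` excerpt referenced above was elided; a minimal scrape job consistent with the description would be:

```yaml
# prometheus/prometheus.yml — scrape Node Exporter via the Docker service name
scrape_configs:
  - job_name: node-exporter
    static_configs:
      - targets: ['node-exporter:9100']   # not localhost:9100
```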
Fix: If the scrape target in `prometheus.yml` is incorrect, update it to use `node-exporter:9100` as the target, then force-recreate Prometheus to pick up the change.
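The force-recreate command this fix refers to was elided; the standard Compose invocation is:

```shell
docker compose up -d --force-recreate prometheus
```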
Port already in use
Symptom: A container exits immediately and `docker compose logs <service>` contains `bind: address already in use`.

Diagnosis: Identify which process is occupying the port — for example, check which process is listening on port 9090. The ports used by each service are:

| Service | Host port |
|---|---|
| Grafana | 3000 |
| Loki | 3100 |
| Prometheus | 9090 |
| Node Exporter | 9100 |

Fix: You have two options:

- Stop the conflicting process and run `docker compose up -d` again.
- Change the host port for the affected service in `docker-compose.yml`. The format is `"<host-port>:<container-port>"`. For example, to move Prometheus to host port `19090`, change its `ports:` entry, then recreate the container. Remember to update any URLs that reference the old port — such as the Prometheus datasource URL in Grafana or the link you use to open the Prometheus UI.