
alloy not able to push logs to loki #2818

Open
navnitkum opened this issue Feb 24, 2025 · 1 comment
Labels
bug Something isn't working


@navnitkum

What's wrong?

Hi, here is my Alloy config:

discovery.kubernetes "kubernetes_events" {
	role = "pod"
}

discovery.relabel "kubernetes_events" {
	targets = discovery.kubernetes.kubernetes_events.targets

	rule {
		source_labels = ["__meta_kubernetes_pod_name"]
		regex         = "kubernetes-event-exporter-.*"
		action        = "keep"
	}

	rule {
		source_labels = ["__meta_kubernetes_pod_uid", "__meta_kubernetes_pod_container_name"]
		separator     = "/"
		target_label  = "__path__"
		replacement   = "/var/log/pods/*$1/*.log"
	}

	rule {
		target_label = "job"
		replacement  = "kubernetes-events"
	}

	rule {
		source_labels = ["namespace"]
		target_label  = "namespace"
	}

	rule {
		source_labels = ["reason"]
		target_label  = "reason"
	}
}

local.file_match "kubernetes_events" {
	path_targets = discovery.relabel.kubernetes_events.output
}

loki.process "kubernetes_events" {
	forward_to = [loki.write.default.receiver]

	stage.json {
		expressions = {
			component            = "source.component",
			event_type           = "type",
			involved_object      = "involvedObject.kind",
			involved_object_name = "involvedObject.name",
			message              = "message",
			namespace            = "metadata.namespace",
			reason               = "reason",
			timestamp            = "lastTimestamp",
		}
	}

	stage.tenant {
		value = "mgmt"
	}

	stage.drop {
		expression = "event_type =~ \"DEBUG|INFO|debug|info\""
	}

	stage.labels {
		values = {
			job       = "kubernetes-events",
			namespace = null,
			reason    = null,
		}
	}
}

loki.source.file "kubernetes_events" {
	targets               = local.file_match.kubernetes_events.targets
	forward_to            = [loki.process.kubernetes_events.receiver]
	legacy_positions_file = "/run/promtail/positions.yaml"
}

discovery.kubernetes "kubernetes_pods" {
	role = "pod"

	namespaces {
		names = ["atlantis", "opentelemetry-operator-system", "argocd", "ingress-nginx", "loki"]
	}
}

discovery.relabel "kubernetes_pods" {
	targets = discovery.kubernetes.kubernetes_pods.targets

	rule {
		source_labels = ["__meta_kubernetes_pod_controller_name"]
		regex         = "([0-9a-z-.]+?)(-[0-9a-f]{8,10})?"
		target_label  = "__tmp_controller_name"
	}

	rule {
		source_labels = ["__meta_kubernetes_pod_label_app_kubernetes_io_name", "__meta_kubernetes_pod_label_app", "__tmp_controller_name", "__meta_kubernetes_pod_name"]
		regex         = "^;*([^;]+)(;.*)?$"
		target_label  = "app"
	}

	rule {
		source_labels = ["__meta_kubernetes_pod_label_app_kubernetes_io_instance", "__meta_kubernetes_pod_label_instance"]
		regex         = "^;*([^;]+)(;.*)?$"
		target_label  = "instance"
	}

	rule {
		source_labels = ["__meta_kubernetes_pod_label_app_kubernetes_io_component", "__meta_kubernetes_pod_label_component"]
		regex         = "^;*([^;]+)(;.*)?$"
		target_label  = "component"
	}

	rule {
		source_labels = ["namespace", "app"]
		separator     = "/"
		target_label  = "job"
	}

	rule {
		source_labels = ["__meta_kubernetes_pod_name"]
		target_label  = "pod"
	}

	rule {
		source_labels = ["__meta_kubernetes_pod_uid", "__meta_kubernetes_pod_container_name"]
		separator     = "/"
		target_label  = "__path__"
		replacement   = "/var/log/pods/*$1/*.log"
	}

	rule {
		source_labels = ["__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_hash", "__meta_kubernetes_pod_annotation_kubernetes_io_config_hash", "__meta_kubernetes_pod_container_name"]
		separator     = "/"
		regex         = "true/(.*)"
		target_label  = "__path__"
		replacement   = "/var/log/pods/*$1/*.log"
	}

	rule {
		source_labels = ["__meta_kubernetes_pod_label_dag_id"]
		target_label  = "dag"
	}
}

local.file_match "kubernetes_pods" {
	path_targets = discovery.relabel.kubernetes_pods.output
}

loki.process "kubernetes_pods" {
	forward_to = [loki.write.default.receiver]

	stage.cri { }

	stage.label_drop {
		values = ["filename", "job", "pod"]
	}
}

loki.source.file "kubernetes_pods" {
	targets               = local.file_match.kubernetes_pods.targets
	forward_to            = [loki.process.kubernetes_pods.receiver]
	legacy_positions_file = "/run/promtail/positions.yaml"
}

loki.write "default" {
	endpoint {
		url     = "http://loki-gateway.loki.svc.cluster.local/loki/api/v1/push"
		headers = {
			"X-Scope-OrgID" = "mgmt",
		}
	}
	external_labels = {}
}
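
One deployment prerequisite worth stating explicitly (an assumption about the setup, not shown in the report): loki.source.file reads pod logs from the node filesystem, so Alloy must run as a DaemonSet with the host's log directory mounted, roughly like this hypothetical excerpt:

# hypothetical DaemonSet spec excerpt; volume names are placeholders
containers:
  - name: alloy
    volumeMounts:
      - name: varlog
        mountPath: /var/log
        readOnly: true
volumes:
  - name: varlog
    hostPath:
      path: /var/log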

I am using the above config to push logs from Alloy to Loki, but I can't see any logs in Loki for any of the apps when querying from Grafana, even though a similar config in Promtail shows logs for the pods. Is there anything I am missing? And what should the Alloy logs look like when Alloy is actually capturing logs?
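
As a quick sanity check (assuming curl can be run from a pod that reaches the gateway), Loki's query API can be hit directly with the same tenant header the config sends; an empty result means nothing was ingested for that selector:

curl -G -H "X-Scope-OrgID: mgmt" \
  "http://loki-gateway.loki.svc.cluster.local/loki/api/v1/query_range" \
  --data-urlencode 'query={job="kubernetes-events"}'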

The Alloy logs below show no sign that it is capturing pod logs:
ts=2025-02-24T10:52:38.971913689Z level=info msg="finished node evaluation" controller_path=/ controller_id="" trace_id=ce71e4cdc0230615bf94887665559d40 node_id=labelstore duration=19.446µs
ts=2025-02-24T10:52:38.972405183Z level=info msg="Using pod service account via in-cluster config" component_path=/ component_id=discovery.kubernetes.kubernetes_events
ts=2025-02-24T10:52:38.972852584Z level=info msg="finished node evaluation" controller_path=/ controller_id="" trace_id=ce71e4cdc0230615bf94887665559d40 node_id=discovery.kubernetes.kubernetes_events duration=844.973µs
ts=2025-02-24T10:52:38.97335997Z level=info msg="finished node evaluation" controller_path=/ controller_id="" trace_id=ce71e4cdc0230615bf94887665559d40 node_id=discovery.relabel.kubernetes_events duration=386.069µs
ts=2025-02-24T10:52:38.973561666Z level=info msg="finished node evaluation" controller_path=/ controller_id="" trace_id=ce71e4cdc0230615bf94887665559d40 node_id=local.file_match.kubernetes_events duration=96.776µs
ts=2025-02-24T10:52:38.97390704Z level=info msg="no legacy positions file found" component_path=/ component_id=loki.source.file.kubernetes_events path=/run/promtail/positions.yaml
ts=2025-02-24T10:52:38.974277541Z level=info msg="finished node evaluation" controller_path=/ controller_id="" trace_id=ce71e4cdc0230615bf94887665559d40 node_id=loki.source.file.kubernetes_events duration=597.662µs
ts=2025-02-24T10:52:38.974411579Z level=info msg="finished node evaluation" controller_path=/ controller_id="" trace_id=ce71e4cdc0230615bf94887665559d40 node_id=otel duration=26.949µs
ts=2025-02-24T10:52:38.974550294Z level=info msg="finished node evaluation" controller_path=/ controller_id="" trace_id=ce71e4cdc0230615bf94887665559d40 node_id=tracing duration=26.832µs
ts=2025-02-24T10:52:38.974655563Z level=info msg="finished complete graph evaluation" controller_path=/ controller_id="" trace_id=ce71e4cdc0230615bf94887665559d40 duration=10.715976ms
ts=2025-02-24T10:52:38.97497407Z level=info msg="scheduling loaded components and services"
ts=2025-02-24T10:52:38.976148979Z level=info msg="starting cluster node" service=cluster peers_count=0 peers="" advertise_addr=127.0.0.1:12345
ts=2025-02-24T10:52:38.976835149Z level=info msg="peers changed" service=cluster peers_count=1 peers=alloy-7xtkk
ts=2025-02-24T10:52:38.978235293Z level=info msg="now listening for http traffic" service=http addr=0.0.0.0:12345
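
For what it's worth, the info-level logs above only cover graph evaluation and startup. Raising the log level with Alloy's standard logging block should surface per-component activity, including what loki.source.file is (or is not) tailing:

logging {
	level  = "debug"
	format = "logfmt"
}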

Steps to reproduce

server:
  log_level: info
  log_format: logfmt
  http_listen_port: 3101
  

clients:
  - headers:
      X-Scope-OrgID: mgmt
    url: http://loki-gateway.loki.svc.cluster.local/loki/api/v1/push

positions:
  filename: /run/promtail/positions.yaml

scrape_configs:
  # See also https://github.com/grafana/loki/blob/master/production/ksonnet/promtail/scrape_config.libsonnet for reference
  - job_name: kubernetes-events
    kubernetes_sd_configs:
      - role: pod
  
    relabel_configs:
      # Keep only event-exporter logs
      - source_labels: [__meta_kubernetes_pod_name]
        regex: kubernetes-event-exporter-.*
        action: keep
      - source_labels: [__meta_kubernetes_pod_uid, __meta_kubernetes_pod_container_name]
        separator: "/"
        target_label: __path__
        replacement: "/var/log/pods/*$1/*.log"
      - target_label: job
        replacement: kubernetes-events
      - source_labels: ["namespace"]
        target_label: namespace
        action: replace
      - source_labels: ["reason"]
        target_label: reason
        action: replace         
    pipeline_stages:
      - json:
          expressions:
            reason: reason
            message: message
            component: source.component
            namespace: metadata.namespace
            event_type: type
            involved_object: involvedObject.kind
            involved_object_name: involvedObject.name
            timestamp: lastTimestamp
      - drop:
          expression: 'event_type =~ "DEBUG|INFO|debug|info"'
      # Add a static label for job_name
      - labels:
          job: "kubernetes-events"
          reason:
          namespace:
  - job_name: kubernetes-pods
    pipeline_stages:
      - cri: {}
      - labeldrop:
        - filename
        - job
        - pod
    kubernetes_sd_configs:
      - role: pod
        namespaces:
          names:
            - atlantis
            - opentelemetry-operator-system
            - argocd
            - ingress-nginx
            - loki
    relabel_configs:
      - source_labels:
          - __meta_kubernetes_pod_controller_name
        regex: ([0-9a-z-.]+?)(-[0-9a-f]{8,10})?
        action: replace
        target_label: __tmp_controller_name
      - source_labels:
          - __meta_kubernetes_pod_label_app_kubernetes_io_name
          - __meta_kubernetes_pod_label_app
          - __tmp_controller_name
          - __meta_kubernetes_pod_name
        regex: ^;*([^;]+)(;.*)?$
        action: replace
        target_label: app
      - source_labels:
          - __meta_kubernetes_pod_label_app_kubernetes_io_instance
          - __meta_kubernetes_pod_label_instance
        regex: ^;*([^;]+)(;.*)?$
        action: replace
        target_label: instance
      - source_labels:
          - __meta_kubernetes_pod_label_app_kubernetes_io_component
          - __meta_kubernetes_pod_label_component
        regex: ^;*([^;]+)(;.*)?$
        action: replace
        target_label: component
      - action: replace
        replacement: $1
        separator: /
        source_labels:
        - namespace
        - app
        target_label: job
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: pod
      - action: replace
        replacement: /var/log/pods/*$1/*.log
        separator: /
        source_labels:
        - __meta_kubernetes_pod_uid
        - __meta_kubernetes_pod_container_name
        target_label: __path__
      - action: replace
        regex: true/(.*)
        replacement: /var/log/pods/*$1/*.log
        separator: /
        source_labels:
        - __meta_kubernetes_pod_annotationpresent_kubernetes_io_config_hash
        - __meta_kubernetes_pod_annotation_kubernetes_io_config_hash
        - __meta_kubernetes_pod_container_name
        target_label: __path__
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_label_dag_id
        target_label: dag

I converted the above Promtail config using
alloy convert --source-format=promtail --output=parse_files_config.alloy promtail.yaml
and then deployed Alloy with the resulting config.
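
If the converter raised any warnings during that conversion, they can be written to a file with the --report flag (check alloy convert --help on your version to confirm it is available):

alloy convert --source-format=promtail --report=report.txt --output=parse_files_config.alloy promtail.yaml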

System information

No response

Software version

No response

Configuration


Logs


@navnitkum navnitkum added the bug Something isn't working label Feb 24, 2025
@wildum
Contributor

wildum commented Feb 25, 2025

Hi, I replied to you in the community Slack. From just the config and the logs it's hard to say what's wrong. I recommend checking the component pages in the UI (you can port-forward it). That will tell you whether it's discovering the targets properly. If the targets look OK, the next step is to enable live debugging and see what's passing through the pipeline at each step. See the docs here: https://grafana.com/docs/alloy/latest/troubleshoot/debug/
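
For reference, the two suggested steps look roughly like this; the namespace and pod name are placeholders, and the livedebugging block is Alloy's documented way to switch the feature on:

# expose the Alloy UI locally (default HTTP port is 12345, matching the logs above)
kubectl port-forward -n <alloy-namespace> <alloy-pod> 12345:12345
# then open http://localhost:12345 and inspect each component's discovered targets

livedebugging {
	enabled = true
}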
