Hello, community! I have a question about some VRL features. I am having a problem with deeply nested JSON objects when using the Elasticsearch sink. I use a simple remap rule to parse messages into objects. There is no way to change the users' behavior, because log collection is provided as a service, so I would like to limit the message parsing depth somehow to prevent overloading Vector/Elasticsearch. For example, given:

{
  "1": {
    "2": {
      "3": {
        "4": {
          "5": {
            "6": "finish"
          }
        }
      }
    }
  }
}

I want the message converted into something like this (documents nested more than five levels deep are not allowed):

{
  "1": {
    "2": {
      "3": {
        "4": {
          "5": "{\"6\": \"finish\"}"
        }
      }
    }
  }
}
Are there any workarounds for this case? What would you suggest?
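For illustration, the transformation being asked for can be sketched outside of VRL. This is a minimal Python model (the helper name `truncate_depth` is hypothetical, not part of Vector): it walks a parsed document and re-serializes any container nested deeper than the allowed depth back into a JSON string.

```python
import json

def truncate_depth(value, max_depth):
    """Return a copy of `value` where any dict/list nested deeper than
    `max_depth` levels is re-serialized as a JSON string."""
    if not isinstance(value, (dict, list)):
        return value  # scalars pass through unchanged
    if max_depth <= 0:
        return json.dumps(value)  # depth budget exhausted: stringify the rest
    if isinstance(value, dict):
        return {k: truncate_depth(v, max_depth - 1) for k, v in value.items()}
    return [truncate_depth(v, max_depth - 1) for v in value]

doc = {"1": {"2": {"3": {"4": {"5": {"6": "finish"}}}}}}
print(truncate_depth(doc, 5))
```

With a depth budget of 5, level 5's value `{"6": "finish"}` is replaced by the string `"{\"6\": \"finish\"}"`, matching the desired output above.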
Hi @nabokihms!
I wonder if `flatten` could help here? https://vector.dev/docs/reference/vrl/functions/#flatten . I'm not thinking of an easy way to do that only after a certain depth in VRL, though. I think we could enhance `flatten` with that functionality. Would that work for you? Otherwise, we could definitely add another function or option to `parse_json` to enable this. `lua` is much slower, but it is an option. To work around the single-thread issue, you could partition the data and run it through N `lua` transforms, joining it back together afterwards. We do have some plans to allow `lua` and other "stateful" transforms to run in parallel in the near-ish future.
cc/ @JeanMertz this could be another iteration use-case?
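For reference, `flatten` collapses nesting entirely into a single-level map; a rough Python model of that behavior (the `.`-joined key convention here is an assumption for illustration, not VRL's exact output format) shows why it removes depth rather than capping it:

```python
def flatten(obj, prefix=""):
    """Collapse a nested dict into a single-level dict with joined keys.
    Rough sketch of flatten-style behavior; not Vector's implementation."""
    out = {}
    for k, v in obj.items():
        key = f"{prefix}.{k}" if prefix else k
        if isinstance(v, dict) and v:
            out.update(flatten(v, key))  # recurse: nesting disappears entirely
        else:
            out[key] = v
    return out

print(flatten({"1": {"2": {"3": "x"}}}))  # every level is merged into one key
```

Because all levels are merged away, using `flatten` for the "stringify below depth N" case would need the depth-aware enhancement discussed above.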