Do not recreate a new counter if we already have one. #13
Conversation
This is by design. It's different between the two. Might have to reconsider this though, seeing how this plays out.
Yah, totally see now that advice only applied to the swift-metrics Counter API. I took a look at the golang Prometheus client library, and it looks like it'll panic with an AlreadyRegisteredError if you register the same metric name twice through promauto:

```go
package main

import (
	"flag"
	"log"
	"net/http"
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

var addr = flag.String("listen-address", ":9001", "The address to listen on for HTTP requests.")

var (
	myCounter = promauto.NewCounter(prometheus.CounterOpts{
		Name: "my_counter",
		Help: "The total number of processed events",
	})
)

func main() {
	flag.Parse()

	go func() {
		for {
			myCounter.Inc()
			time.Sleep(1 * time.Second)
		}
	}()

	// This second registration panics, because promauto registers with the
	// default registry and the name "my_counter" is already taken.
	myCounterTwo := promauto.NewCounter(prometheus.CounterOpts{
		Name: "my_counter",
		Help: "The total number of processed events",
	})

	go func() {
		for {
			myCounterTwo.Inc()
			time.Sleep(1 * time.Second)
		}
	}()

	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(*addr, nil))
}
```
But the docs are pretty interesting, since they do suggest you can catch the error and use the existing collector. Testing out the example, it does indeed give you back a reference to the first counter. Note that this version uses prometheus.NewCounter with an explicit prometheus.Register call instead of promauto:

```go
package main

import (
	"flag"
	"log"
	"net/http"
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

var addr = flag.String("listen-address", ":9001", "The address to listen on for HTTP requests.")

var (
	myCounter = promauto.NewCounter(prometheus.CounterOpts{
		Name: "my_counter",
		Help: "The total number of processed events",
	})
)

func main() {
	flag.Parse()

	go func() {
		for {
			myCounter.Inc()
			time.Sleep(1 * time.Second)
		}
	}()

	myCounterTwo := prometheus.NewCounter(prometheus.CounterOpts{
		Name: "my_counter",
		Help: "The total number of processed events",
	})
	if err := prometheus.Register(myCounterTwo); err != nil {
		if are, ok := err.(prometheus.AlreadyRegisteredError); ok {
			// A counter for that metric has been registered before.
			// Use the old counter from now on.
			myCounterTwo = are.ExistingCollector.(prometheus.Counter)
		} else {
			// Something else went wrong!
			panic(err)
		}
	}

	go func() {
		for {
			myCounterTwo.Inc()
			time.Sleep(1 * time.Second)
		}
	}()

	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(*addr, nil))
}
```

I think the context at the top of those docs is worth a read, too. So based on this spelunking on what I'd consider the "reference" implementation, I think we should throw an error either way. However, that leaves me with a couple of open questions.
To clarify: why are you not using the public Counter API?

```swift
public func makeCounter(label: String, dimensions: [(String, String)]) -> CounterHandler {
    let createHandler = { (counter: PromCounter) -> CounterHandler in
        return MetricsCounterHandler(counter: counter, dimensions: dimensions)
    }
    // vvvvvvv the "right" thing / "semantics you expected"
    if let counter: PromCounter<Int64, DimensionLabels> = self.getMetricInstance(with: label, andType: .counter) {
        return createHandler(counter)
    }
    // ^^^^^^^
    return createHandler(self.createCounter(forType: Int64.self, named: label, withLabelType: DimensionLabels.self))
}
```

Why not use the metrics API as a facade there, as it was intended? This library IS an implementation of the metrics SPI, and thus this works:

```swift
let c1 = Counter(label: "one")
c1.increment()
let c2 = Counter(label: "one")
c2.increment()
```

Why not use it, rather than bind yourself to the prometheus API directly?
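For completeness, here is a minimal sketch of how that facade usage fits together end to end. The bootstrap step is an assumption on my part (that the Prometheus client, or a thin wrapper around it, conforms to swift-metrics' MetricsFactory); only the Counter calls come from the snippet above, and the exact type to bootstrap with may differ in this library.

```swift
import Metrics
import Prometheus  // module name assumed

// Assumption: the client (or a wrapper) conforms to MetricsFactory.
let prometheus = PrometheusClient()
MetricsSystem.bootstrap(prometheus)

// All metric creation now goes through the swift-metrics facade. The
// backend (this library) decides whether both handles end up on the same
// underlying collector.
let c1 = Counter(label: "one")
c1.increment()
let c2 = Counter(label: "one")
c2.increment()
```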
No, I don't think so. One cannot compare Go's panic to fatal erroring in Swift: panics are recoverable, Swift fatal errors are not. Throwing an error sounds good if creating the same metric is attempted again.
By "auto" you mean "return me the same instance if it existed already"? That's what the SwiftMetrics API would do; what am I missing? // nitpick: let's not use the word "panic", it mixes things up a bit IMHO... Swift does not do panics, Go does, and they're recoverable.
I think there are valid use cases in an environment where Prometheus is such a standard that using this package directly is fine and has some benefits. That said, the goal for this issue is to track down the expected behavior when using this library directly. Any opposition to throwing an error that also includes a reference to the existing metric? That way callers can surface the collision just in case, but can still use the existing metric if that's acceptable.
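A hypothetical sketch of what that could look like on the Swift side, loosely mirroring Go's AlreadyRegisteredError. The names here (Metric, Registry, DuplicateMetricError, DemoCounter) are illustrative only, not this library's API:

```swift
// Hypothetical sketch: Metric, Registry, DuplicateMetricError, and
// DemoCounter are illustrative names, not SwiftPrometheus API.
protocol Metric: AnyObject {
    var name: String { get }
}

/// Thrown when a metric with the same name already exists; carries a
/// reference to the first registration, mirroring the Go client's
/// AlreadyRegisteredError.
struct DuplicateMetricError: Error {
    let existing: Metric
}

final class Registry {
    private var metrics: [String: Metric] = [:]

    func register(_ metric: Metric) throws {
        if let existing = metrics[metric.name] {
            throw DuplicateMetricError(existing: existing)
        }
        metrics[metric.name] = metric
    }
}

final class DemoCounter: Metric {
    let name: String
    init(name: String) { self.name = name }
}

// Caller side: surface the collision, but fall back to the existing
// instance if that's acceptable, just like the Go example above.
let registry = Registry()
var counter = DemoCounter(name: "my_counter")

do {
    try registry.register(counter)
    // A second create for the same name collides with the first.
    try registry.register(DemoCounter(name: "my_counter"))
} catch let error as DuplicateMetricError {
    // Reuse the previously registered counter instead of the new one.
    counter = error.existing as! DemoCounter
} catch {
    // Something else went wrong.
    fatalError("unexpected error: \(error)")
}

print("using counter:", counter.name)
```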
Yikes, you're totally right. I definitely assumed the metrics API would hand back the same instance for an existing label.
Me too, I was surprised a bit as well.
(force-pushed from 5d87049 to 2fe6b12)
I played around with this a bit, and it "feels" better to follow the approach discussed above.
In one of the early iterations of swift-metrics we did that for "free" as part of the API package (caching, basically), but we decided to remove it and leave it to the implementation to decide. IMO, the expected behavior when the type and other arguments align is to return the existing one, but we wanted to leave the door open to other logic.
@tomerd ah, that's good context and totally makes sense. Given that behavior, I think keeping this simple is fine for now, but I'm definitely open to other feedback too.
(force-pushed from 1ae094c to f4037ad)
Looks like this needed a rebase off master 👍
LGTM. Thanks @Yasumoto
Thanks, folks! 🎊
@MrLotU do you mind adding this to the 1.0 milestone? I would, but I'm not sure I have the permissions. 🙇♂️
Done. Good catch.
Checklist
Motivation and Context
We want to make sure we're reusing metrics, not losing ones we've already created.
Description
I discovered when running a service that I'm getting multiple copies of the same metric appearing when I collect(). I'm calling createCounter on each request, based on the good advice I got here from @ktoso. If this looks roughly right to y'all, I can update the other types to follow this pattern, though I'm definitely open to feedback 🎉
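For reference, a rough sketch of the scenario being fixed. The createCounter signature is borrowed from the makeCounter snippet quoted earlier in this thread and is an assumption; it may not match the library's public API exactly.

```swift
import Prometheus  // module name assumed

// Rough sketch; exact signatures are assumptions borrowed from the
// makeCounter snippet above, not verified against the library.
let prom = PrometheusClient()

// Simulate "one createCounter call per request", all for the same label.
for _ in 0..<3 {
    let counter = prom.createCounter(forType: Int64.self,
                                     named: "my_counter",
                                     withLabelType: DimensionLabels.self)
    counter.inc()
}

// Before this change, collect() reported a separate my_counter for each
// call above; with the lookup added here it should be a single counter
// with a value of 3.
```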