Steps to reproduce:

1. Have many log messages. We generate 10GB of messages per day.
2. Run `papertrail -f`.
3. Observe occasional gaps of 1-2 seconds in the messages. For example, we'll see a message from 12:01:01, followed by a message from 12:01:03 (without any of the messages from 12:01:02).
I assume this is by design! I'm guessing that if there are a ton of messages, you didn't want to overwhelm the servers or delay the CLI with too much data.
Regardless, I'd like a realtime (or near-realtime) firehose to parse. What is the best way to get that data? My ideas:

1. Use the "archive to S3" function, but that forces a delay of 1-2 hours and is unusable for this project.
2. Manually "chunk" the data on my side, by requesting 5 minutes of data at a time (so at 12:05, I request the data for 12:00 until 12:05, etc.).
3. ...?
Is there any way to get `papertrail -f` to stop dropping messages? If not, how would you develop a realtime-ish system?
I ended up going with option 2 - it's working well!
I have a cron job that runs every minute. The cron job kicks off a Python script, which calls the papertrail CLI to fetch all the log messages from the previous minute and dump them to a file.
One annoyance is that the CLI doesn't format messages the same way as Papertrail's S3 archiver, so we have to convert the log messages to a matching format ourselves.
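For reference, here's a minimal sketch of the kind of script the cron job could run. It assumes the installed papertrail CLI is on PATH, already configured with an API token, and accepts `--min-time`/`--max-time` with timestamps in the form shown; the dump directory and filename scheme are placeholders.

```python
#!/usr/bin/env python3
"""Rough sketch: dump the previous minute of Papertrail logs to a file.

Assumes the papertrail CLI (papertrail-cli) is on PATH and configured with
an API token, and that it accepts --min-time/--max-time; adjust the flags
and timestamp format to match your installed version.
"""
import subprocess
from datetime import datetime, timedelta, timezone

DUMP_DIR = "/var/log/papertrail-dumps"  # placeholder path


def dump_previous_minute():
    # Work in UTC and truncate to the start of the current minute,
    # so each run covers exactly the minute that just ended.
    end = datetime.now(timezone.utc).replace(second=0, microsecond=0)
    start = end - timedelta(minutes=1)

    fmt = "%Y-%m-%d %H:%M:%S UTC"
    cmd = [
        "papertrail",
        "--min-time", start.strftime(fmt),
        "--max-time", end.strftime(fmt),
    ]

    out_path = f"{DUMP_DIR}/{start.strftime('%Y%m%d-%H%M')}.log"
    with open(out_path, "w") as f:
        # Any reformatting to match the S3 archive layout would be done
        # on this output afterwards.
        subprocess.run(cmd, stdout=f, check=True)


if __name__ == "__main__":
    dump_previous_minute()
```

The crontab entry is just something like `* * * * * /usr/bin/python3 /path/to/dump_previous_minute.py`.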
We're also experiencing this issue, but manual chunking isn't viable for tailing given the volume of logs we have (not to mention it delays our response time).
Here's a snippet showing the only frames papertrail-cli output for a 15-frame stack trace:
PHP 4. ...
PHP 11. ...
PHP 12. ...
And then a minute later, another trace had massive gaps too.