feat: provide log retrieval commands #5
Merged
d526a54 feat: provide log retrieval commands
Log retrieval has two components: obtaining the files from S3 and reassembling them. There is a
separate subcommand for each.
Obtaining the files is a recursive process that involves visiting the hierarchy of folders in the
bucket. Since there are many folders, this can be time-consuming for a large testnet.
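As a rough illustration (not the tool's actual code), the recursive walk over the bucket's "folders" can be sketched with an in-memory stand-in for S3's delimiter-based listing; the map and prefix names here are assumptions for the example only.

```rust
use std::collections::BTreeMap;

// Hypothetical sketch of the recursive folder walk. `children` stands in
// for an S3 list-objects call with a `/` delimiter, backed here by an
// in-memory map so the example is self-contained.
fn visit(
    children: &BTreeMap<String, Vec<String>>, // prefix -> child prefixes
    prefix: &str,
    leaves: &mut Vec<String>,
) {
    match children.get(prefix) {
        Some(subfolders) => {
            for sub in subfolders {
                visit(children, sub, leaves); // descend into each folder
            }
        }
        // No subfolders: this prefix is a leaf whose objects we would fetch.
        None => leaves.push(prefix.to_string()),
    }
}
```

Each listing call is a network round trip, which is why the walk gets slow as the number of folders grows with the size of the testnet.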
Logstash forwards the files to the bucket in parts, every 30 minutes. For this reason, they
need to be reassembled back into a single file, based on the part numbers in the file names.
Logstash intentionally uses a different GUID on each file part.
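A minimal sketch of the reassembly step: sort the parts by part number, concatenate them, and remove the originals. The filename pattern `<name>-part-<n>-<guid>.log` is an assumption for illustration; the real parts carry a part number and a per-part GUID, but the exact layout may differ.

```rust
use std::fs;
use std::path::{Path, PathBuf};

// Extract the part number from a name like "node-part-3-ab12cd34.log".
// The pattern is hypothetical; only the presence of a part number is
// taken from the commit description.
fn part_number(path: &Path) -> Option<u32> {
    let stem = path.file_stem()?.to_str()?;
    let after = stem.split("-part-").nth(1)?;
    after.split('-').next()?.parse().ok()
}

fn reassemble(parts_dir: &Path, output: &Path) -> std::io::Result<()> {
    // Collect every file that looks like a part, keyed by part number.
    let mut parts: Vec<(u32, PathBuf)> = fs::read_dir(parts_dir)?
        .filter_map(|e| {
            let p = e.ok()?.path();
            part_number(&p).map(|n| (n, p))
        })
        .collect();
    parts.sort_by_key(|(n, _)| *n);

    // Concatenate the parts in order into the assembled log.
    let mut assembled = Vec::new();
    for (_, path) in &parts {
        assembled.extend(fs::read(path)?);
    }
    fs::write(output, assembled)?;

    // The reassembly process removes the original part files.
    for (_, path) in parts {
        fs::remove_file(path)?;
    }
    Ok(())
}
```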
The reason these have been implemented as separate subcommands is so that we can continually sync
the logs from S3 if that is necessary, without re-downloading the parts we've already retrieved. The
reassembly process removes the original part files.
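The "don't re-download" behaviour amounts to filtering the remote key list against what is already on disk. A sketch, with illustrative names rather than the tool's real API:

```rust
use std::collections::HashSet;

// Given the keys present in the bucket and the part files already
// downloaded, return only the keys that still need fetching.
fn keys_to_fetch(remote_keys: &[&str], local_files: &HashSet<&str>) -> Vec<String> {
    remote_keys
        .iter()
        .filter(|k| !local_files.contains(*k)) // skip anything already retrieved
        .map(|k| k.to_string())
        .collect()
}
```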
There has also been some simple logging enabled in this commit, just using the `env_logger`
crate, since we don't really need anything sophisticated for this program.
b15451a chore: remove angus from key list
dedd402 feat: provide log copy command
Use an Ansible playbook to copy logs from all the remote machines to the `logs` directory on the
local machine. There are a few redundant directories that get copied as part of the directory
structure, so there's a Python script that clears these up.
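The clean-up the Python script performs can be sketched (here in Rust, for consistency with the rest of the examples) as collapsing chains of single-child directories, e.g. `logs/host/a/b/node.log` becoming `logs/host/node.log`. The exact redundant layout is an assumption; only "redundant directories get cleared up" comes from the commit.

```rust
use std::fs;
use std::path::{Path, PathBuf};

// Collapse chains of single-child directories under `dir`, moving the
// contents up one level at a time until no lone subdirectory remains.
fn flatten(dir: &Path) -> std::io::Result<()> {
    let entries: Vec<PathBuf> = fs::read_dir(dir)?
        .filter_map(|e| e.ok())
        .map(|e| e.path())
        .collect();
    if let [only] = entries.as_slice() {
        if only.is_dir() {
            // Move the lone subdirectory's contents up, then drop it.
            for entry in fs::read_dir(only)? {
                let p = entry?.path();
                fs::rename(&p, dir.join(p.file_name().unwrap()))?;
            }
            fs::remove_dir(only)?;
            return flatten(dir); // keep collapsing
        }
    }
    Ok(())
}
```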
This also switches on trace logging for the services by using `SN_LOG=all`.
7a9f824 feat: provide logs rm command
Remove the logs from a previous testnet run by deleting the entire folder from S3.
When a new testnet is created we check to see if logs from a previous run with the same name already
exist. We won't proceed unless the user deletes those logs. We offer the choice of deleting them
outright or retrieving them before deletion.
If we did proceed, the logs from the two testnets would be intermingled with each other and this
would lead to a confusing situation.
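The pre-flight decision can be summarised in a small sketch; the enum and function names are illustrative assumptions, not the tool's real API.

```rust
// What the user chose to do with logs left over from a previous run.
#[derive(Clone, Copy, PartialEq, Debug)]
enum Choice {
    Delete,             // remove the old logs, then proceed
    RetrieveThenDelete, // download the old logs first, then remove them
    Keep,               // keep the old logs: creation is refused
}

fn may_create_testnet(previous_logs_exist: bool, choice: Choice) -> bool {
    // Proceeding with old logs still in place would intermingle the two
    // runs' logs, so creation is only allowed once they are gone.
    !previous_logs_exist || choice != Choice::Keep
}
```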