WIP: Added document to explain measurements in RITA #482
base: master
Conversation
* Implementation of limit flag
* Implementation of no-limit flag
What you have is good too. I don't think the interval score and data size score are actually displayed in the beacon output, but we do store those in the database. I'm not sure if we should explain those in the doc or not.

However, I would like something that goes through and literally explains the column titles in the `show-*` commands. Maybe something like this could go under each section in the document (e.g. "## Beacons") before each of the in-depth explanations. Each list item could be a link (if applicable) to the relevant section in the document, or have a short description if there is no corresponding section. If you need help figuring out how to get the links right, let me know; I think there are examples in our other docs. The following are the column titles you will see in the output of …
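For reference, here is a rough sketch of the kind of list meant above, using GitHub's automatic heading anchors for the links; the column titles, section names, and descriptions below are placeholders rather than RITA's actual `show-beacons` output:

```markdown
## Beacons

The following are the column titles you will see in the output of `show-beacons`:

* [Score](#score) - the overall beacon score for the source/destination pair
* [Connections](#connections) - the number of connections seen between the pair
* Avg. Bytes - the average bytes transferred per connection (no dedicated section)

### Score

In-depth explanation of the score...

### Connections

In-depth explanation of the connection count...
```

GitHub derives the `#score`-style anchors from the heading text (lowercased, with spaces replaced by hyphens), so the links only need to match the headings already in the document.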
Update the link to documentation about Security Onion with RITA to point to the new version of Security Onion's wiki.
* Runs tests automatically
* Uploads release artifacts automatically
* Fixed icert recording error and max beacon error
* Fixed updating chunks containing same info as previous chunk
* Limit icert OrigIps
Clarify docs based on user feedback. Update Docker documentation. Add Ubuntu 18.04 to list of supported operating systems.
As I dug through the code and tried to make sense of the several indicators and scores used in the analyzer, I really wished I had some documentation to back up my intuitions, mostly regarding the choices made. Why pick 30 seconds in the computation of the …? Anyways, I think the indicators are all straightforward; just the scores might need some explanation.
@Spriithy Thanks for the feedback! To answer your question about these lines, I'm not entirely sure how 30 seconds and 32 bytes were picked. It could be they were just arbitrary choices that tended to work well.
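To make the role of those constants a bit more concrete, here is a minimal, self-contained sketch of how a median absolute deviation (MADM) measurement could be normalized into a 0-to-1 score using 30 seconds and 32 bytes as the scale. The function names and the exact formula are illustrative assumptions, not necessarily RITA's actual implementation:

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

// median returns the median of xs (xs is not modified).
func median(xs []float64) float64 {
	s := append([]float64(nil), xs...)
	sort.Float64s(s)
	n := len(s)
	if n%2 == 1 {
		return s[n/2]
	}
	return (s[n/2-1] + s[n/2]) / 2
}

// madm returns the median absolute deviation from the median.
func madm(xs []float64) float64 {
	m := median(xs)
	devs := make([]float64, len(xs))
	for i, x := range xs {
		devs[i] = math.Abs(x - m)
	}
	return median(devs)
}

// intervalScore maps the MADM of the inter-connection intervals (in seconds)
// onto 0..1, treating 30 seconds of deviation as "no longer beacon-like".
// Illustrative only; the real scoring may combine more sub-scores.
func intervalScore(intervalsSec []float64) float64 {
	return math.Max(1.0-madm(intervalsSec)/30.0, 0)
}

// dataSizeScore does the same for payload sizes, with 32 bytes as the scale.
func dataSizeScore(sizesBytes []float64) float64 {
	return math.Max(1.0-madm(sizesBytes)/32.0, 0)
}

func main() {
	// A very regular 60-second beacon with near-constant payload sizes.
	intervals := []float64{60, 60, 61, 59, 60, 60}
	sizes := []float64{512, 512, 514, 512, 510, 512}
	fmt.Printf("interval score:  %.2f\n", intervalScore(intervals))
	fmt.Printf("data size score: %.2f\n", dataSizeScore(sizes))
}
```

Under that kind of normalization, traffic whose intervals deviate from their median by 30 seconds or more (or whose sizes deviate by 32 bytes or more) scores 0, while perfectly regular traffic scores 1, so the constants would effectively set how much jitter a connection can show before it stops looking like a beacon.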
Thanks for the feedback! Is there some sort of roadmap for the project? Anything maybe we could contribute to?
No public roadmap :( But any issue marked "good first issue" would be very helpful, and contributions would be welcome. If you're interested in any of them, just start commenting on the issue with questions or a proposed solution.
* Adding support for Zeek JSON logs
* Support multiple time formats
* Adding support for corelight/json-streaming logs
* Change RITA build-from-source instructions to use HTTPS for `git clone` instead of SSH, to avoid a permission denied error
* Correct the _Gittiquette Summary_ note in Contributing.md; repoint the repo to the fork using `git remote set-url`, not `git remote add`
I updated the doc I'd written before to hopefully explain some of the headers better, but I didn't have a chance to get it proofread. I'm not totally sure if it's what you were hoping for, @Spriithy, so please feel free (but not obligated by any means) to give any feedback (and apologies for taking so long to get back to this!)
I think an explanation is still needed of what counts as a good or bad beacon score.
Added a document that explains measurements in RITA
Based on the documents I was given, I wasn't sure if we wanted a document like this or one that literally goes through and explains the column titles given in the `show-*` commands. If we want something else, I can rework this into that easily.