Note
Help Wanted
This project is maintained by one developer. Please consider reviewing the open issues to see how you can contribute code, documentation, or user support. Assistance with the pinned issues would be particularly helpful.
Thanks to all contributors!
parsedmarc
is a Python module and CLI utility for parsing DMARC reports.
When used with Elasticsearch and Kibana (or Splunk), it works as a self-hosted
open source alternative to commercial DMARC report processing services such
as Agari Brand Protection, Dmarcian, OnDMARC, ProofPoint Email Fraud Defense,
and Valimail.
- Parses draft and 1.0 standard aggregate/rua reports
- Parses forensic/failure/ruf reports
- Can parse reports from an inbox over IMAP, Microsoft Graph, or Gmail API
- Transparently handles gzip or zip compressed reports
- Consistent data structures
- Simple JSON and/or CSV output
- Optionally email the results
- Optionally send the results to Elasticsearch and/or Splunk, for use with premade dashboards
- Optionally send reports to Apache Kafka
- Demystifying DMARC - A complete guide to SPF, DKIM, and DMARC
If you are looking for SPF and DMARC record validation and parsing, check out the sister project, checkdmarc.
DMARC protects against domain spoofing, not lookalike domains. For open source lookalike domain monitoring, check out DomainAware.
usage: parsedmarc [-h] [-c CONFIG_FILE] [--strip-attachment-payloads] [-o OUTPUT]
                  [--aggregate-json-filename AGGREGATE_JSON_FILENAME]
                  [--forensic-json-filename FORENSIC_JSON_FILENAME]
                  [--aggregate-csv-filename AGGREGATE_CSV_FILENAME]
                  [--forensic-csv-filename FORENSIC_CSV_FILENAME]
                  [-n NAMESERVERS [NAMESERVERS ...]] [-t DNS_TIMEOUT] [--offline]
                  [-s] [--verbose] [--debug] [--log-file LOG_FILE] [-v]
                  [file_path ...]

Parses DMARC reports

positional arguments:
  file_path             one or more paths to aggregate or forensic report
                        files, emails, or mbox files

optional arguments:
  -h, --help            show this help message and exit
  -c CONFIG_FILE, --config-file CONFIG_FILE
                        a path to a configuration file (--silent implied)
  --strip-attachment-payloads
                        remove attachment payloads from forensic report output
  -o OUTPUT, --output OUTPUT
                        write output files to the given directory
  --aggregate-json-filename AGGREGATE_JSON_FILENAME
                        filename for the aggregate JSON output file
  --forensic-json-filename FORENSIC_JSON_FILENAME
                        filename for the forensic JSON output file
  --aggregate-csv-filename AGGREGATE_CSV_FILENAME
                        filename for the aggregate CSV output file
  --forensic-csv-filename FORENSIC_CSV_FILENAME
                        filename for the forensic CSV output file
  -n NAMESERVERS [NAMESERVERS ...], --nameservers NAMESERVERS [NAMESERVERS ...]
                        nameservers to query
  -t DNS_TIMEOUT, --dns_timeout DNS_TIMEOUT
                        number of seconds to wait for an answer from DNS
                        (default: 2.0)
  --offline             do not make online queries for geolocation or DNS
  -s, --silent          only print errors and warnings
  --verbose             more verbose output
  --debug               print debugging information
  --log-file LOG_FILE   output logging to a file
  -v, --version         show program's version number and exit
Note
Starting in parsedmarc 7.1.1, a static copy of the IP to Country Lite database from DB-IP is distributed with parsedmarc, under the terms of the Creative Commons Attribution 4.0 International License, as a fallback if the MaxMind GeoLite2 Country database is not installed. However, parsedmarc cannot install updated versions of these databases as they are released, so MaxMind's databases and the geoipupdate tool are still the preferable solution.
The location of the database file can be overridden by using the ip_db_path
setting.
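For example, a general section that points parsedmarc at a locally managed database might look like this (the path below is a hypothetical example, not a default):

```ini
[general]
# Hypothetical path to a locally maintained MaxMind or DB-IP MMDB file
ip_db_path = /var/lib/GeoIP/GeoLite2-Country.mmdb
```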
Note
Starting in parsedmarc
6.0.0, most CLI options were moved to a configuration file, described below.
parsedmarc
can be configured by supplying the path to an INI file
parsedmarc -c /etc/parsedmarc.ini
For example:
# This is an example comment
[general]
save_aggregate = True
save_forensic = True
[imap]
host = imap.example.com
user = [email protected]
password = $uperSecure
[mailbox]
watch = True
delete = False
[elasticsearch]
hosts = 127.0.0.1:9200
ssl = False
[splunk_hec]
url = https://splunkhec.example.com
token = HECTokenGoesHere
index = email
[s3]
bucket = my-bucket
path = parsedmarc
[syslog]
server = localhost
port = 514
The full set of configuration options is:
Note
% characters must be escaped with another % character, so use %% wherever a % character is used, such as in a password.
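For example, if the IMAP password were literally $uper%Secure (a made-up value for illustration), the % would need to be doubled:

```ini
[imap]
host = imap.example.com
user = [email protected]
# The literal password is $uper%Secure; the % is escaped as %%
password = $uper%%Secure
```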
general
- save_aggregate - bool: Save aggregate report data to Elasticsearch, Splunk and/or S3
- save_forensic - bool: Save forensic report data to Elasticsearch, Splunk and/or S3
- strip_attachment_payloads - bool: Remove attachment payloads from results
- output - str: Directory to place JSON and CSV files in
- aggregate_json_filename - str: filename for the aggregate JSON output file
- forensic_json_filename - str: filename for the forensic JSON output file
- ip_db_path - str: An optional custom path to a MMDB file from MaxMind or DBIP
- offline - bool: Do not use online queries for geolocation or DNS
- nameservers - str: A comma separated list of DNS resolvers (Default: Cloudflare's public resolvers)
- dns_timeout - float: DNS timeout period
- debug - bool: Print debugging messages
- silent - bool: Only print errors (Default: True)
- log_file - str: Write log messages to a file at this path
- n_procs - int: Number of processes to run in parallel when parsing in CLI mode (Default: 1)
- chunk_size - int: Number of files to give to each process when running in parallel.
Note
Setting this to a number larger than one can improve performance when processing thousands of files
mailbox
- reports_folder - str: The mailbox folder (or label for Gmail) where the incoming reports can be found (Default: INBOX)
- archive_folder - str: The mailbox folder (or label for Gmail) to sort processed emails into (Default: Archive)
- watch - bool: Use the IMAP IDLE command to process messages as they arrive, or poll MS Graph for new messages
- delete - bool: Delete messages after processing them, instead of archiving them
- test - bool: Do not move or delete messages
- batch_size - int: Number of messages to read and process before saving. Default 10. Use 0 for no limit.
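As a sketch, a mailbox section that archives processed reports into a hypothetical DMARC/Processed folder and saves after every 50 messages might look like:

```ini
[mailbox]
reports_folder = INBOX
# Hypothetical archive folder name; use whatever exists in your mailbox
archive_folder = DMARC/Processed
watch = True
delete = False
batch_size = 50
```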
imap
- host - str: The IMAP server hostname or IP address
- port - int: The IMAP server port (Default: 993)
Note
Starting in version 8.0.0, most options from the imap section have been moved to the mailbox section.
Note
If your host recommends another port, still try 993.
- ssl - bool: Use an encrypted SSL/TLS connection (Default: True)
- skip_certificate_verification - bool: Skip certificate verification (not recommended)
- user - str: The IMAP user
- password - str: The IMAP password
msgraph
- auth_method - str: Authentication method; valid types are UsernamePassword, DeviceCode, or ClientSecret (Default: UsernamePassword)
- user - str: The M365 user, required when the auth method is UsernamePassword
- password - str: The user password, required when the auth method is UsernamePassword
- client_id - str: The app registration's client ID
- client_secret - str: The app registration's secret
- tenant_id - str: The Azure AD tenant ID. This is required for all auth methods except UsernamePassword.
- mailbox - str: The mailbox name. This defaults to the current user if using the UsernamePassword auth method, but could be a shared mailbox if the user has access to the mailbox
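For illustration, a msgraph section using the ClientSecret auth method against a shared mailbox might look like this (all IDs and the mailbox address below are placeholders):

```ini
[msgraph]
auth_method = ClientSecret
client_id = 00000000-0000-0000-0000-000000000000
client_secret = ClientSecretGoesHere
tenant_id = 00000000-0000-0000-0000-000000000000
mailbox = [email protected]
```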
Note
You must create an app registration in Azure AD and have an admin grant the Microsoft Graph Mail.ReadWrite (delegated) permission to the app. If you are using UsernamePassword auth and the mailbox is different from the username, you must grant the app Mail.ReadWrite.Shared.
Warning
If you are using the ClientSecret auth method, you need to grant the Mail.ReadWrite (application) permission to the app. You must also restrict the application's access to a specific mailbox, since it allows all mailboxes by default. Use the New-ApplicationAccessPolicy command in the Exchange PowerShell module. If you need to scope the policy to shared mailboxes, you can add them to a mail-enabled security group and use that as the group id.
New-ApplicationAccessPolicy -AccessRight RestrictAccess -AppId "<CLIENT_ID>" -PolicyScopeGroupId "<MAILBOX>" -Description "Restrict access to dmarc reports mailbox."
elasticsearch
- hosts - str: A comma separated list of hostnames and ports or URLs (e.g. 127.0.0.1:9200 or https://user:secret@localhost)
Note
Special characters in the username or password must be URL encoded.
- ssl - bool: Use an encrypted SSL/TLS connection (Default: True)
- cert_path - str: Path to a trusted certificate
- index_suffix - str: A suffix to apply to the index names
- monthly_indexes - bool: Use monthly indexes instead of daily indexes
- number_of_shards - int: The number of shards to use when creating the index (Default: 1)
- number_of_replicas - int: The number of replicas to use when creating the index (Default: 1)
splunk_hec
- url - str: The URL of the Splunk HTTP Events Collector (HEC)
- token - str: The HEC token
- index - str: The Splunk index to use
- skip_certificate_verification - bool: Skip certificate verification (not recommended)
kafka
- hosts - str: A comma separated list of Kafka hosts
- user - str: The Kafka user
- password - str: The Kafka password
- ssl - bool: Use an encrypted SSL/TLS connection (Default: True)
- skip_certificate_verification - bool: Skip certificate verification (not recommended)
- aggregate_topic - str: The Kafka topic for aggregate reports
- forensic_topic - str: The Kafka topic for forensic reports
smtp
- host - str: The SMTP hostname
- port - int: The SMTP port (Default: 25)
- ssl - bool: Require SSL/TLS instead of using STARTTLS
- skip_certificate_verification - bool: Skip certificate verification (not recommended)
- user - str: The SMTP username
- password - str: The SMTP password
- from - str: The From header to use in the email
- to - list: A list of email addresses to send to
- subject - str: The Subject header to use in the email (Default: parsedmarc report)
- attachment - str: The ZIP attachment filename
- message - str: The email message (Default: Please see the attached parsedmarc report.)
s3
- bucket - str: The S3 bucket name
- path - str: The path to upload reports to (Default: /)
- region_name - str: The region name (Optional)
- endpoint_url - str: The endpoint URL (Optional)
- access_key_id - str: The access key id (Optional)
- secret_access_key - str: The secret access key (Optional)
syslog
- server - str: The Syslog server name or IP address
- port - int: The UDP port to use (Default: 514)
gmail_api
- gmail_api_credentials_file - str: Path to the file containing the credentials, None to disable (Default: None)
- gmail_api_token_file - str: Path to save the token file to (Default: .token)
- gmail_api_include_spam_trash - bool: Include messages in Spam and Trash when searching for reports (Default: False)
- gmail_api_scopes - str: Comma separated list of scopes to use when acquiring credentials (Default: https://www.googleapis.com/auth/gmail.modify)
Warning
It is strongly recommended not to use the nameservers setting.
By default, parsedmarc
uses Cloudflare's public resolvers,
which are much faster and more reliable than Google, Cisco OpenDNS, or
even most local resolvers.
The nameservers
option should only be used if your network blocks DNS
requests to outside resolvers.
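If outbound DNS is blocked, internal resolvers can be listed in the general section (the addresses below are placeholders):

```ini
[general]
# Only set this if your network blocks DNS requests to outside resolvers
nameservers = 10.0.0.53,10.0.1.53
dns_timeout = 5.0
```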
Warning
save_aggregate
and save_forensic
are separate options because
you may not want to save forensic reports (also known as failure reports)
to your Elasticsearch instance, particularly if you are in a
highly-regulated industry that handles sensitive data, such as healthcare
or finance. If your legitimate outgoing email fails DMARC, it is possible
that email may appear later in a forensic report.
Forensic reports contain the original headers of an email that failed a DMARC check, and sometimes may also include the full message body, depending on the policy of the reporting organization.
Most reporting organizations do not send forensic reports of any kind for privacy reasons. While aggregate DMARC reports are sent at least daily, it is normal to receive very few forensic reports.
An alternative approach is to still collect forensic/failure/ruf reports
in your DMARC inbox, but run parsedmarc
with save_forensic = True
manually on a separate IMAP folder (using the reports_folder
option),
after you have manually moved known samples you want to save to that
folder (e.g. malicious samples and non-sensitive legitimate samples).
docker run -v "${PWD}/config.ini:/config.ini" ghcr.io/domainaware/parsedmarc:<version> -c /config.ini
Here are the results from parsing the example report from the dmarc.org wiki. It's actually an older draft of the 1.0 report schema standardized in RFC 7489 Appendix C. This draft schema is still in wide use.
parsedmarc
produces consistent, normalized output, regardless of the report
schema.
{
"xml_schema": "draft",
"report_metadata": {
"org_name": "acme.com",
"org_email": "[email protected]",
"org_extra_contact_info": "http://acme.com/dmarc/support",
"report_id": "9391651994964116463",
"begin_date": "2012-04-27 20:00:00",
"end_date": "2012-04-28 19:59:59",
"errors": []
},
"policy_published": {
"domain": "example.com",
"adkim": "r",
"aspf": "r",
"p": "none",
"sp": "none",
"pct": "100",
"fo": "0"
},
"records": [
{
"source": {
"ip_address": "72.150.241.94",
"country": "US",
"reverse_dns": "adsl-72-150-241-94.shv.bellsouth.net",
"base_domain": "bellsouth.net"
},
"count": 2,
"alignment": {
"spf": true,
"dkim": false,
"dmarc": true
},
"policy_evaluated": {
"disposition": "none",
"dkim": "fail",
"spf": "pass",
"policy_override_reasons": []
},
"identifiers": {
"header_from": "example.com",
"envelope_from": "example.com",
"envelope_to": null
},
"auth_results": {
"dkim": [
{
"domain": "example.com",
"selector": "none",
"result": "fail"
}
],
"spf": [
{
"domain": "example.com",
"scope": "mfrom",
"result": "pass"
}
]
}
}
]
}
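Because the parsed output is plain JSON, downstream tooling can consume it without parsedmarc itself. As a minimal sketch (not part of parsedmarc), this tallies message counts by DMARC alignment from an aggregate report shaped like the output above:

```python
def alignment_totals(report: dict) -> dict:
    """Tally message counts by DMARC alignment from a parsed aggregate report."""
    totals = {"dmarc_aligned": 0, "dmarc_failed": 0}
    for record in report["records"]:
        # Each record carries a message count and an alignment verdict
        if record["alignment"]["dmarc"]:
            totals["dmarc_aligned"] += record["count"]
        else:
            totals["dmarc_failed"] += record["count"]
    return totals

# A trimmed-down report in the same shape as the JSON output above
report = {
    "records": [
        {"count": 2, "alignment": {"spf": True, "dkim": False, "dmarc": True}}
    ]
}
print(alignment_totals(report))  # → {'dmarc_aligned': 2, 'dmarc_failed': 0}
```

In practice the report dict would come from `json.load()` on one of the JSON files parsedmarc writes to the output directory.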
xml_schema,org_name,org_email,org_extra_contact_info,report_id,begin_date,end_date,errors,domain,adkim,aspf,p,sp,pct,fo,source_ip_address,source_country,source_reverse_dns,source_base_domain,count,spf_aligned,dkim_aligned,dmarc_aligned,disposition,policy_override_reasons,policy_override_comments,envelope_from,header_from,envelope_to,dkim_domains,dkim_selectors,dkim_results,spf_domains,spf_scopes,spf_results
draft,acme.com,[email protected],http://acme.com/dmarc/support,9391651994964116463,2012-04-27 20:00:00,2012-04-28 19:59:59,,example.com,r,r,none,none,100,0,72.150.241.94,US,adsl-72-150-241-94.shv.bellsouth.net,bellsouth.net,2,True,False,True,none,,,example.com,example.com,,example.com,none,fail,example.com,mfrom,pass
Thanks to GitHub user xennn for the anonymized forensic report email sample.
{
"feedback_type": "auth-failure",
"user_agent": "Lua/1.0",
"version": "1.0",
"original_mail_from": "[email protected]",
"original_rcpt_to": "[email protected]",
"arrival_date": "Mon, 01 Oct 2018 11:20:27 +0200",
"message_id": "<38.E7.30937.BD6E1BB5@ mailrelay.de>",
"authentication_results": "dmarc=fail (p=none, dis=none) header.from=domain.de",
"delivery_result": "policy",
"auth_failure": [
"dmarc"
],
"reported_domain": "domain.de",
"arrival_date_utc": "2018-10-01 09:20:27",
"source": {
"ip_address": "10.10.10.10",
"country": null,
"reverse_dns": null,
"base_domain": null
},
"authentication_mechanisms": [],
"original_envelope_id": null,
"dkim_domain": null,
"sample_headers_only": false,
"sample": "Received: from Servernameone.domain.local (Servernameone.domain.local [10.10.10.10])\n\tby mailrelay.de (mail.DOMAIN.de) with SMTP id 38.E7.30937.BD6E1BB5; Mon, 1 Oct 2018 11:20:27 +0200 (CEST)\nDate: 01 Oct 2018 11:20:27 +0200\nMessage-ID: <38.E7.30937.BD6E1BB5@ mailrelay.de>\nTo: <[email protected]>\nfrom: \"=?utf-8?B?SW50ZXJha3RpdmUgV2V0dGJld2VyYmVyLcOcYmVyc2ljaHQ=?=\" <[email protected]>\nSubject: Subject\nMIME-Version: 1.0\nX-Mailer: Microsoft SharePoint Foundation 2010\nContent-Type: text/html; charset=utf-8\nContent-Transfer-Encoding: quoted-printable\n\n<html><head><base href=3D'\nwettbewerb' /></head><body><!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 3.2//EN\"=\n><HTML><HEAD><META NAME=3D\"Generator\" CONTENT=3D\"MS Exchange Server version=\n 08.01.0240.003\"></html>\n",
"parsed_sample": {
"from": {
"display_name": "Interaktive Wettbewerber-Übersicht",
"address": "[email protected]",
"local": "sharepoint",
"domain": "domain.de"
},
"to_domains": [
"domain.de"
],
"to": [
{
"display_name": null,
"address": "[email protected]",
"local": "peter.pan",
"domain": "domain.de"
}
],
"subject": "Subject",
"timezone": "+2",
"mime-version": "1.0",
"date": "2018-10-01 09:20:27",
"content-type": "text/html; charset=utf-8",
"x-mailer": "Microsoft SharePoint Foundation 2010",
"body": "<html><head><base href='\nwettbewerb' /></head><body><!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 3.2//EN\"><HTML><HEAD><META NAME=\"Generator\" CONTENT=\"MS Exchange Server version 08.01.0240.003\"></html>",
"received": [
{
"from": "Servernameone.domain.local Servernameone.domain.local 10.10.10.10",
"by": "mailrelay.de mail.DOMAIN.de",
"with": "SMTP id 38.E7.30937.BD6E1BB5",
"date": "Mon, 1 Oct 2018 11:20:27 +0200 CEST",
"hop": 1,
"date_utc": "2018-10-01 09:20:27",
"delay": 0
}
],
"content-transfer-encoding": "quoted-printable",
"message-id": "<38.E7.30937.BD6E1BB5@ mailrelay.de>",
"has_defects": false,
"headers": {
"Received": "from Servernameone.domain.local (Servernameone.domain.local [10.10.10.10])\n\tby mailrelay.de (mail.DOMAIN.de) with SMTP id 38.E7.30937.BD6E1BB5; Mon, 1 Oct 2018 11:20:27 +0200 (CEST)",
"Date": "01 Oct 2018 11:20:27 +0200",
"Message-ID": "<38.E7.30937.BD6E1BB5@ mailrelay.de>",
"To": "<[email protected]>",
"from": "\"Interaktive Wettbewerber-Übersicht\" <[email protected]>",
"Subject": "Subject",
"MIME-Version": "1.0",
"X-Mailer": "Microsoft SharePoint Foundation 2010",
"Content-Type": "text/html; charset=utf-8",
"Content-Transfer-Encoding": "quoted-printable"
},
"reply_to": [],
"cc": [],
"bcc": [],
"attachments": [],
"filename_safe_subject": "Subject"
}
}
feedback_type,user_agent,version,original_envelope_id,original_mail_from,original_rcpt_to,arrival_date,arrival_date_utc,subject,message_id,authentication_results,dkim_domain,source_ip_address,source_country,source_reverse_dns,source_base_domain,delivery_result,auth_failure,reported_domain,authentication_mechanisms,sample_headers_only
auth-failure,Lua/1.0,1.0,,[email protected],[email protected],"Mon, 01 Oct 2018 11:20:27 +0200",2018-10-01 09:20:27,Subject,<38.E7.30937.BD6E1BB5@ mailrelay.de>,"dmarc=fail (p=none, dis=none) header.from=domain.de",,10.10.10.10,,,,policy,dmarc,domain.de,,False
Please report bugs on the GitHub issue tracker