This is necessary for investigations, baselining behaviors, writing rules, and generating analytics over days, weeks, or months of log data.
Panther normalizes and processes log data to store it in S3 in a standard, efficient format.
Additionally, any other application that can read from S3 can access this data for search, business intelligence, redundancy, or any other purpose.
The following databases are available:
All data sent via Log Analysis, organized by log type
Events for all triggered alerts, organized by log type
Events for all errors from rules (e.g., Python tracebacks)
Standardized fields across all logs and rule matches
Panther cloud security scanning data
Panther data loader self-monitoring (Snowflake only)
This is the main Panther database, holding parsed records of all onboarded log types. The number and size of the tables here vary with the sources you onboard. See a few sample queries here.
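As a sketch of what querying this database can look like, the following counts events per region for one log type. The database, table, and column names (`panther_logs.aws_cloudtrail`, `awsRegion`) are assumptions for illustration; the tables actually present depend on the log types you have onboarded, and `p_event_time` is one of the standardized fields Panther attaches to each row.

```sql
-- Illustrative only: count CloudTrail events per AWS region over the last day.
-- Database/table/column names are assumptions; substitute the tables created
-- for your own onboarded log types.
SELECT awsRegion,
       COUNT(*) AS event_count
FROM panther_logs.aws_cloudtrail
WHERE p_event_time > CURRENT_TIMESTAMP - INTERVAL '1 DAY'
GROUP BY awsRegion
ORDER BY event_count DESC;
```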
Whenever an event from an onboarded source matches a rule, Panther writes a row to the corresponding table in the rule matches database. This provides an easy historical view of which rules fired, and why.
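A sketch of such a historical view, under the assumption that each match row carries a standardized `p_rule_id` field and that match tables mirror the log-type tables (`panther_rule_matches.aws_cloudtrail` here is an assumed name):

```sql
-- Illustrative only: which rules fired most often over the past week.
SELECT p_rule_id,
       COUNT(*) AS match_count
FROM panther_rule_matches.aws_cloudtrail  -- assumed table name
WHERE p_event_time > CURRENT_TIMESTAMP - INTERVAL '7 DAY'
GROUP BY p_rule_id
ORDER BY match_count DESC;
```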
Sometimes, due to either incorrect code or a permissions issue, a rule returns an error and does not complete its run successfully. The rule errors tables keep track of any such events for easy debugging.
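Debugging might then start with a query like the one below. The table name is an assumption (mirroring the log-type table naming used elsewhere); selecting everything and sorting by event time is simply a quick way to see the most recent failures.

```sql
-- Illustrative only: inspect the most recent rule errors for one log type.
SELECT *
FROM panther_rule_errors.aws_cloudtrail  -- assumed table name
WHERE p_event_time > CURRENT_TIMESTAMP - INTERVAL '1 DAY'
ORDER BY p_event_time DESC
LIMIT 50;
```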
The Panther team has worked hard to bring together common data fields that let users search across multiple data sources at once. These are exposed here as views (virtual tables).
The following views are available:
Search all data (logs, rule matches, and errors)
Search all log data
Search all cloud security data
Search all events matching rules
Search all events causing rule errors
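These views are what make cross-source indicator searches possible. The sketch below assumes a view named `panther_views.all_logs` and a standardized `p_any_ip_addresses` array field collecting every IP seen in an event; `ARRAY_CONTAINS` is Snowflake syntax, so the array predicate would differ on another backend.

```sql
-- Illustrative only: find every event, across all log sources, that
-- references a suspect IP via a standardized IP-address field.
-- View and column names are assumptions; ARRAY_CONTAINS is Snowflake syntax.
SELECT p_log_type,
       p_event_time
FROM panther_views.all_logs
WHERE ARRAY_CONTAINS('198.51.100.7'::VARIANT, p_any_ip_addresses)
ORDER BY p_event_time DESC;
```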
The Panther Cloud Security Database stores AWS configuration information, along with any changes detected by scans of the monitored environments.
Panther Monitor contains information about the data load process into Panther's Snowflake database itself. See the Snowflake Backend section for more details on this.
We are working hard to add to the power of the Panther Query Engine. We will be delivering:
Indicator Lists, which will let users name and save their own lists of indicators
Automated intel enrichment, such as Tor exit nodes, GeoIP, and cloud provider CIDR ranges
Summary Tables, which will enable users to create baselines over large sets of data
Derivative Tables, which will automatically process, filter, and/or enrich your log data
Search optimization, which will deliver much faster search speeds
Pre-canned searches, which will provide templates to speed up your work