Panther includes five CloudWatch dashboards that provide visibility into the operation of the system:

  • PantherOverview: An overview of the errors and performance of all Panther components.

  • PantherCloudSecurity: Details of the components monitoring infrastructure for CloudSecurity.

  • PantherAlertProcessing: Details of the components that relay alerts for CloudSecurity and Log Processing.

  • PantherLogAnalysis: Details of the components processing logs and running rules.

  • PantherRemediation: Details of the components that remediate infrastructure issues.


Panther uses CloudWatch Alarms to monitor the health of each component. To receive notifications, edit the deployments/panther_config.yml file to associate an SNS topic you have created with the Panther CloudWatch alarms. If this value is left blank, Panther associates the alarms with the default Panther SNS topic, panther-alarms:

# This is the arn for the SNS topic you want associated with Panther system alarms.
# If this is not set alarms will be associated with the SNS topic `panther-alarms`.
AlarmSNSTopicARN: 'arn:aws:sns:us-east-1:05060362XXX:MyAlarmSNSTopic'

To configure alarms to send to your team, follow the guides below:

  • PagerDuty Integration

    NOTE: As of this writing (August 2020), PagerDuty cannot handle the composite CloudWatch alarms that Panther uses to avoid duplicate pages to on-call staff. The workaround is to use a Custom Event Transformer.

    Follow the instructions, using the code below for the Custom Event Transformer:

    var details = JSON.parse(PD.inputRequest.rawBody);
    var description = "unknown event";
    if ("AlarmDescription" in details) { // looks like a CloudWatch event ...
      var descLines = details.AlarmDescription.split("\n");
      description = (descLines.length > 1) ? descLines[0] + " " + descLines[1] : descLines[0];
    }
    var normalized_event = {
      event_type: PD.Trigger,
      description: description,
      incident_key: description,
      details: details
    };
    PD.emitGenericEvents([normalized_event]);
    When creating the PagerDuty subscription, configure the SNS topic to use RawMessageDelivery: true.
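
    Raw delivery matters here because, without it, SNS wraps the alarm JSON in a notification envelope, so the transformer's JSON.parse sees the envelope rather than the alarm itself. A minimal sketch (Python, with a made-up alarm payload) illustrating the difference:

    ```python
    import json

    # A truncated, hypothetical CloudWatch alarm notification body.
    alarm = {"AlarmName": "panther-composite",
             "AlarmDescription": "Log processor errors\nSee runbook"}

    # With RawMessageDelivery: true, the endpoint receives the alarm JSON directly,
    # so the transformer's `"AlarmDescription" in details` check works.
    raw_body = json.dumps(alarm)
    assert "AlarmDescription" in json.loads(raw_body)

    # With RawMessageDelivery: false (the default), SNS wraps the payload in an
    # envelope; the alarm fields are a JSON *string* under the "Message" key.
    wrapped_body = json.dumps({"Type": "Notification", "Message": json.dumps(alarm)})
    assert "AlarmDescription" not in json.loads(wrapped_body)   # check would fail
    assert "AlarmDescription" in json.loads(json.loads(wrapped_body)["Message"])
    ```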

Assessing Data Ingest Volume

The Panther log analysis CloudWatch dashboard provides deep insight into the operationally relevant aspects of log processing. In particular, understanding ingest volume is critical for forecasting the cost of running Panther. One of the panes in the dashboard shows ingest volume by log type; used in combination with your AWS bill, this can forecast costs as your data scales. We suggest using a month of data to estimate your costs.
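
The forecasting arithmetic can be sketched as below (the per-GB rate and ingest figure are placeholders; take the real numbers from your AWS bill and the dashboard pane):

```python
# Rough monthly cost forecast from the dashboard's 4-week uncompressed ingest total.
# four_week_mb and cost_per_gb are hypothetical inputs, not Panther defaults.

def estimate_monthly_cost(four_week_mb: float, cost_per_gb: float) -> float:
    """Scale a 4-week ingest total (MB) to ~1 month and price it."""
    weeks_per_month = 52 / 12              # ~4.33 weeks in an average month
    monthly_gb = four_week_mb / 1024 * (weeks_per_month / 4)
    return monthly_gb * cost_per_gb

# e.g. 409,600 MB (400 GB) over 4 weeks at a hypothetical effective $0.25/GB
print(round(estimate_monthly_cost(409_600, 0.25), 2))  # → 108.33
```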

The steps to view the dashboard:

  • Log in to the AWS Console

  • Select CloudWatch from the Services menu

  • Select Dashboards from the left pane of the CloudWatch console

  • Select the dashboard beginning with PantherLogAnalysis

  • Select the vertical ... (three-dot) menu of the pane entitled Input MBytes (Uncompressed) by Log Type and select View in CloudWatch Insights from the menu

  • Set the time period to 4 weeks and click Apply

  • Click Run Query


Panther comes with operational tools useful for managing the Panther infrastructure. These are statically compiled executables for Linux, macOS (AKA darwin), and Windows, and can be copied or installed onto operational support hosts.

These tools require AWS credentials to be set in the environment. We recommend managing these securely with a tool such as AWS Vault.

Running any of these commands with the -h flag prints usage information.

Both Devtools and Opstools are found at https://panther-community-us-east-1.s3.amazonaws.com/{version}/tools/{architecture}.zip

{version} is the latest Panther version, e.g. v1.22.5

{architecture} is one of:

  • darwin-amd64

  • linux-amd64

  • linux-arm

  • windows-amd64

  • windows-arm

Each zip archive contains both the Ops and Dev tools.
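
Assembling the download URL is mechanical; a small sketch (the version string here is just an example):

```python
# Build the tools download URL from a Panther version and target architecture.
BASE = "https://panther-community-us-east-1.s3.amazonaws.com"

SUPPORTED = {"darwin-amd64", "linux-amd64", "linux-arm",
             "windows-amd64", "windows-arm"}

def tools_url(version: str, architecture: str) -> str:
    """Return the S3 URL for the zipped Ops/Dev tools."""
    if architecture not in SUPPORTED:
        raise ValueError(f"unsupported architecture: {architecture}")
    return f"{BASE}/{version}/tools/{architecture}.zip"

print(tools_url("v1.22.5", "darwin-amd64"))
# → https://panther-community-us-east-1.s3.amazonaws.com/v1.22.5/tools/darwin-amd64.zip
```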


Opstools:

  • compact: backfills JSON-to-Parquet conversion of log data (used when upgrading to Panther Enterprise)

  • cost: generates cost reports using the Cost Explorer API

  • flushrsc: flushes delete-pending entries from the panther-resource table

  • gluerecover: scans S3 for missing AWS Glue partitions and recovers them

  • gluesync: updates Glue table and partition schemas

  • migrate: performs a data migration for the gsuite_reports table (log & rule tables)

  • s3queue: lists files under an S3 path and sends them to the log processor input queue (useful for backfilling data)

  • s3sns: lists S3 objects and posts S3 notifications to the log processor SNS topic

  • snowconfig: uses an account-admin-enabled Snowflake user to configure the databases and roles for the Panther users

  • snowcreate: uses the Panther Snowflake org admin account and credentials to create new Snowflake accounts

  • snowrepair: generates a DDL file to configure Snowflake to ingest Panther data

  • snowrotate: uses an account-admin-enabled Snowflake user to rotate the credentials for the two Panther users

  • sources: lists all log sources and optionally validates that each log processing role can be assumed and its data accessed


Devtools:

  • filegen: writes synthetic log files to S3 for use in benchmarking

  • logprocessor: runs the log processor locally for profiling with pprof

  • pantherlog: parses logs using built-in or custom schemas

An example of a full link to the set of tools would be: https://panther-community-us-east-1.s3.amazonaws.com/v1.22.5/tools/darwin-amd64.zip