
Alert Destinations

Overview

Destinations are integrations that receive alerts from rules and policies.
By default, alerts are routed based on severity and can dispatch to multiple destinations simultaneously. For example, a single alert might create a Jira ticket, create a PagerDuty incident, and send an email via Amazon Simple Notification Service (SNS).
You can override destinations on a per-rule or per-policy basis by setting the destination in the detection's Python function or its metadata. For a detailed explanation of how routing is determined, see Alert routing scenarios below.
As of version 1.42, Panther sends alerts from a known static IP address, allowing you to configure destinations to accept alerts only from that address. The address is listed as Gateway Public IP in the Panther Console; to find it, navigate to Settings > General and scroll to the bottom of the page.

How to configure destinations

Follow the pages below to learn how to set up specific alert destinations.

Supported destinations

Panther provides built-in integrations for the following destinations:

Setting up destinations that are not natively supported

If you'd like to receive alerts at a destination that isn't natively supported by Panther, consider using the Custom Webhook or API workflows.

Panther's Custom Webhook

Use Panther's Custom Webhook destination to reach additional third parties with APIs, such as Tines, TheHive, or SOCless.

Panther's API

If the destination you'd like to reach doesn't have a public API (such as an internal application), you can instead retrieve alerts by polling Panther's API on a schedule. See the available API operations for viewing and manipulating alerts on Alerts & Errors.
However, because alerts are fetched from Panther on a schedule (say, every few minutes or hours) rather than being sent to the destination as soon as they are created, as with the Custom Webhook and supported integrations, this method can introduce a delay.
The Revelstoke integration is an example of this style of alert notification.
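If you build such a poller, it might look like the following minimal sketch. The endpoint path, the X-API-Key header, and the query fields shown are assumptions for illustration; consult the Alerts & Errors API documentation for the exact operations and schema.

import requests  # third-party HTTP client: pip install requests

# Placeholders; substitute your instance URL and API token.
PANTHER_API_URL = "https://YOUR-INSTANCE.runpanther.net/public/graphql"
PANTHER_API_TOKEN = "YOUR-API-TOKEN"

# Illustrative GraphQL query; verify field names against the API docs.
ALERTS_QUERY = """
query RecentAlerts($input: AlertsInput!) {
  alerts(input: $input) {
    edges { node { id title severity } }
  }
}
"""

def poll_alerts(created_after):
    """Fetch alerts created after the given ISO 8601 timestamp."""
    response = requests.post(
        PANTHER_API_URL,
        headers={"X-API-Key": PANTHER_API_TOKEN},
        json={
            "query": ALERTS_QUERY,
            "variables": {"input": {"createdAtAfter": created_after}},
        },
        timeout=30,
    )
    response.raise_for_status()
    return [edge["node"] for edge in response.json()["data"]["alerts"]["edges"]]

# Call poll_alerts() from cron or a scheduler, tracking the last
# timestamp you polled so each run only fetches new alerts.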

Modifying or deleting destinations

  1. Log in to the Panther Console.
  2. In the left sidebar menu, click Configure > Alert Destinations.
  3. Click the triple dot icon on the right side of the destination.
    • In the dropdown menu that appears, click Delete to delete the destination.
    • Click Edit to modify the display name, the severity level, and other configurations.
    The triple dot icon on the right side of an alert is expanded, and an arrow points to the "Delete" option in the dropdown menu.

Alert routing scenarios

The destination(s) an alert is routed to depend on the destination configuration on the detection, if any, or else on the configuration of the destination itself. The routing scenarios are explained below, in order of precedence from highest to lowest.
If you want a certain destination to only receive alerts from one specific detection, you can create a destination that contains no severity levels or log types, then configure the detection to point to that destination (using destinations(), OutputIds, or the Destination Overrides field). See Panther's KB article for more information: How do I route a single Panther alert to a specific alert destination?

Scenario 1: Dynamically defined destination(s) on the detection

A Python detection can define a destinations() function that determines which destination(s) should be alerted. Destinations defined in this way take precedence over all other configurations.
If there is no destinations() function defined in the detection's Python body, or if there is a destinations() function defined, but it returns an empty list, Panther will move on to Scenario 2, below, to find alert destinations.
If the list returned from the destinations() function includes "SKIP", the alert will not be routed to any destination. If other destination names/UUIDs are included in the returned list, they will be ignored.
A Python file shows a rule function as well as a destinations function. destinations includes a conditional statement, which either routes alerts to "slack-security-alerts", or "SKIP"
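A minimal sketch of such a detection is shown below; the event fields and the destination name are illustrative only.

def rule(event):
    # Illustrative match condition
    return event.get("eventType") == "user.account.lock"

def destinations(event):
    # Route production alerts to Slack; suppress delivery for everything else.
    # Returned values can be destination display names or UUIDs.
    if event.get("environment") == "production":
        return ["slack-security-alerts"]
    # "SKIP" prevents the alert from being routed to any destination
    return ["SKIP"]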

Scenario 2: Statically defined destination(s) on the detection

Static destination overrides can be defined either within a detection’s YAML file or in the Console:
  • In the CLI workflow, you can statically define destinations in a detection's YAML file, by setting the OutputIds field.
  • In the Console, destinations are defined within a detection's Rule Settings, using the Destination Overrides field.
"Overrides" means this method of destination definition takes precedence over Scenario 3, below.

Scenario 3: Destination configuration

If destinations are not defined on the detection (as described in Scenarios 1 and 2), the configurations on destinations themselves are invoked. In order for an alert to be routed to a given destination, the following conditions must be met:
  • The Severity Levels configured on the destination must include the severity level of the alert.
    • Note that an alert's severity is either statically defined within the detection's Severity key, or dynamically defined by the detection's severity() function (for Python detections) or DynamicSeverities value (for YAML detections). A sketch of such a function appears after this list.
  • The Alert Types configured on the destination must include the type of the alert.
  • If the destination is configured to only accept alerts from certain Log Types, that list must include the log type associated with the alert.
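For Python detections, a dynamic severity function might look like this minimal sketch. The privileged-account check is illustrative; returning "DEFAULT" falls back to the detection's static Severity.

from panther_base_helpers import deep_get

def severity(event):
    # Escalate lockouts of privileged accounts (illustrative condition)
    if deep_get(event, "actor", "alternateId", default="").startswith("admin"):
        return "CRITICAL"
    # "DEFAULT" defers to the static Severity set on the detection
    return "DEFAULT"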

Destination example

The following example demonstrates how an alert is routed to destinations when a user repeatedly fails to log in to Okta.
You have configured the following:
  • Destinations:
    • Slack, configured to receive an alert for rule matches.
    • Tines (set up via Custom Webhook), configured to receive an alert for rule matches.
  • Log source:
    • Your Panther instance is ingesting Okta logs.
  • Detection:
    • You created a rule called “Okta User Locked Out” to alert you when a user is locked out of Okta due to too many failed login attempts:
      from panther_base_helpers import deep_get

      def rule(event):
          # Match Okta events where the outcome reason indicates a lockout
          return deep_get(event, 'outcome', 'reason') == 'LOCKED OUT'

      def title(event):
          return f"{deep_get(event, 'actor', 'alternateId')} is locked out."

      def destinations(event):
          # Route alerts for this specific user to dedicated destinations;
          # everything else goes to the general channel
          if deep_get(event, 'actor', 'alternateId') == "[email protected]":
              return ['dev-alert-destinations', 'tines-okta-user-lock-out']
          return ['dev-general']

      def alert_context(event):
          # Pass the username and Okta ID along with the alert
          return {
              "actor": deep_get(event, "actor", "displayName"),
              "id": deep_get(event, "actor", "id")
          }
    • The alert_context() function returns the username and the user's Okta ID value.

1. An event occurs

One of your users unsuccessfully attempts to log in to Okta multiple times. Eventually their account is locked out.

2. Panther ingests logs and detects an event that matches the rule you configured

As the Okta audit logs stream through your Panther instance, your “Okta User Locked Out” rule detects that a user is locked out.
An alert in the Panther Console shows that a user is locked out. The triggered rule is called "Okta User Locked Out."

3. The rule match triggers an alert

The detected rule match triggers an alert to your Slack destination and to your Tines destination.
Within a few minutes of the event occurring, the alert appears in the Slack channel you configured as a destination:
A Slack app posts a Panther alert that says a user is locked out. The alert includes a link to the Panther UI, a Runbook that recommends verifying IPs, a Severity of Low, and Alert Context that includes the "actor" and "id" parameters.
The alert is also sent to Tines via a Custom Webhook you've configured as a destination. Tines receives the values from the alert_context() function and is set up to automatically unlock the user's Okta account, then send a confirmation message in Slack.
The automated process in Tines shows the sequence of events: Receive Alert from Panther, Wait 10 minutes, Unlock Okta user by ID via HTTP Request, Send Unlock message to Slack via HTTP Request.

Destination schema

Workflow automation

The alert payload generally takes the following form. For Custom Webhooks, SNS, SQS, and other workflow automation destinations, this structure defines how you process the alert.
For native integrations such as Jira or Slack, this is processed automatically into a form that the destination can understand.
{
  "id": string,
  "createdAt": AWSDateTime,
  "severity": string,
  "type": string,
  "link": string,
  "title": string,
  "name": string,
  "alertId": string,
  "description": string,
  "runbook": string,
  "tags": [string],
  "version": string
}
The AWSDateTime scalar type represents a valid extended ISO 8601 DateTime string. It accepts datetime strings of the form YYYY-MM-DDThh:mm:ss.sssZ. The field after the seconds field is a nanoseconds field. It can accept between 1 and 9 digits. The seconds and nanoseconds fields are optional. The time zone offset is compulsory for this scalar. The time zone offset must either be Z (representing the UTC time zone) or be in the format ±hh:mm:ss. The seconds field in the timezone offset will be considered valid even though it is not part of the ISO 8601 standard.

Example JSON payload:

{
  "id": "AllLogs.IPMonitoring",
  "createdAt": "2020-10-13T03:35:24Z",
  "severity": "INFO",
  "type": "RULE",
  "link": "https://runpanther.io/alerts/b90c19e66e160e194a5b3b94ec27fb7c",
  "title": "New Alert: Suspicious traffic detected from [123.123.123.123]",
  "name": "Monitor Suspicious IPs",
  "alertId": "b90c19e66e160e194a5b3b94ec27fb7c",
  "description": "This rule alerts on any activity outside of our IP address whitelist",
  "runbook": "",
  "tags": [
    "Network Monitoring",
    "Threat Intel"
  ],
  "version": "CJm9PiaXV0q8U0JhoFmE6L21ou7e5Ek0"
}
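If you are building a Custom Webhook receiver for this payload, a minimal sketch of a handler might look like the following. Authentication, TLS, and error handling are omitted, and the port and routing logic are illustrative.

from http.server import BaseHTTPRequestHandler, HTTPServer
import json

class PantherAlertHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read and parse the JSON alert payload described above
        length = int(self.headers.get('Content-Length', 0))
        alert = json.loads(self.rfile.read(length))
        # Route on the documented fields, e.g. severity and title
        print(f"[{alert.get('severity')}] {alert.get('title')} ({alert.get('alertId')})")
        self.send_response(200)
        self.end_headers()

if __name__ == '__main__':
    # Port 8000 is illustrative; put this behind TLS in practice
    HTTPServer(('0.0.0.0', 8000), PantherAlertHandler).serve_forever()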

Troubleshooting alert destinations

Visit the Panther Knowledge Base to view articles about alert destinations that answer frequently asked questions and help you resolve common errors and issues.