Snowflake Integration

Panther can be configured to write processed log data to one or more AWS-based Snowflake database clusters. This allows you to join Panther processed data with your other data sources in Snowflake.

Integrating Panther with Snowflake enables Panther data to be used in your Business Intelligence tools to build dashboards tailored to your operations. In addition, you can join Panther data (e.g., Panther alerts) to your business data, enabling assessment of your security posture with respect to your organization.

For example, you can tally alerts by organizational division (e.g., Human Resources) or by infrastructure (e.g., Development, Test, Production).

Panther uses Snowpipe to copy the data into your Snowflake cluster.

Configuration Overview

There are three parts to configuring Panther to integrate with Snowflake.

Part 1: Configure the Panther AWS Secret for access by the user interface

Part 2: Configure Panther to ingest data into Snowflake

Part 3: Update permissions and test Panther user interface with Snowflake

As of this writing, there is a bug in Snowflake where dropping the Snowpipe configuration does not clean up the registered AWS references in Snowflake. This means that dropping the Snowflake configuration, tearing down Panther, and redeploying Panther will cause Snowflake to not "see" the new Panther infrastructure, which can be very difficult to troubleshoot. The solution is to contact Snowflake customer support and have them purge the references to the previous Snowpipe AWS resources.

Configure the Panther User Interface with an AWS Secret for access

Step 1: Create Snowflake user

You will need to create a Snowflake user with read-only permissions in your Snowflake account.

You can do this by pasting the commands below into your Snowflake SQL shell. Be sure to replace <your_password> with a secure password and <your_warehouse> with the name of your Snowflake warehouse.

create user panther_readonly password='<your_password>';
create role panther_readonly_role;
grant role panther_readonly_role to user panther_readonly;
alter user panther_readonly set default_role = panther_readonly_role;
grant role public to role panther_readonly_role;
grant usage on warehouse <your_warehouse> to role panther_readonly_role;
alter user panther_readonly set default_warehouse = '<your_warehouse>';
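
Optionally, you can sanity-check the new user and role before moving on. These are standard Snowflake commands; the exact output columns may vary by Snowflake version:

-- Confirm the role grant and default settings on the user
show grants to user panther_readonly;
-- Confirm the warehouse usage grant on the role
show grants to role panther_readonly_role;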

Step 2: Create a KMS key in your AWS account

You will use this key to encrypt the Snowflake secrets that will be stored in your AWS account as part of Step 3.

  1. Log in to your AWS account

  2. (Optional) Go to the same region that your Snowflake account is in

  3. Go to the KMS service

  4. Click on Create a key

  5. Pick Symmetric for the type and click Next

  6. Set the alias to panther-secret and click Next. On the next page, click Next again (accepting the defaults)

  7. Click on Add another AWS Account and enter the account ID where Panther is installed.

  8. Click Next and then click Finish.

Step 3: Store Snowflake user password in AWS Secrets Manager

You will use AWS Secrets Manager to store the Snowflake user password. It will be configured to allow access only from a single Lambda function in the Panther account.

  1. Access AWS Secrets Manager via the console and select the Store a new secret button on the page.

  2. You will be presented with a page titled Store a new secret. Select Other type of secrets from the list of types. Specify the following key/value pairs:

  • account: The name of your Snowflake account. It can be found by executing SELECT CURRENT_ACCOUNT()

  • user: panther_readonly, or the Snowflake user you created in Step 1

  • password: The Snowflake user password that you created in Step 1

  • host: Usually <something>.snowflakecomputing.com, taken from the URL used to log in to your cluster

  • port: Use 443 unless you have configured it differently

  • warehouse: The name of your Snowflake active warehouse

  3. Select panther-secret from the dropdown under Select the encryption key.

  4. Click Next.

  5. You will be presented with a screen asking for the name and description of the secret. Fill these in and click Next.

  6. The next screen concerns automatic rotation; accept the defaults and click Next.

  7. Finally, you will be presented with an overview screen. Scroll to the bottom and click the Store button.

Make a note of the ARN for the secret. We will use it later.
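
For reference, Secrets Manager stores these key/value pairs as a single JSON document. A sketch of what the secret value looks like, with placeholder values to substitute with your own:

{
  "account": "ab12345",
  "user": "panther_readonly",
  "password": "<your_password>",
  "host": "ab12345.us-east-1.snowflakecomputing.com",
  "port": "443",
  "warehouse": "<your_warehouse>"
}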

Configure Data Ingest into Snowflake

In order to configure Panther, you need to get the SNOWFLAKE_IAM_USER from Snowflake.

In a Snowflake SQL shell, execute the SQL below, replacing myaccountid with your AWS account ID and myaccountregion with the account's region:

SELECT system$get_aws_sns_iam_policy('arn:aws:sns:myaccountregion:myaccountid:panther-processed-data-notifications');

You should see a response of:

{
  "Version":"2012-10-17",
  "Statement":[
    {
      "Sid":"1",
      "Effect":"Allow",
      "Principal":{
        "AWS":"arn:aws:iam::34318291XXXX:user/k7m2-s-v2st0722"
      },
      "Action":["sns:Subscribe"],
      "Resource":["arn:aws:sns:myaccountregion:myaccountid:panther-processed-data-notifications"]
    }
  ]
}

In the above example, the SNOWFLAKE_IAM_USER is the value of the AWS attribute arn:aws:iam::34318291XXXX:user/k7m2-s-v2st0722. Keep this handy; we will use it later.

If you are deploying from source, edit deployments/panther_config.yml.

Update SecretsManagerARN in the panther_config.yml file with the ARN of the secret you created.

# Setting this configures SnowflakeAPI allowing the Data Explorer and scheduled queries to
# access data in Snowflake. This is the ARN of a secret in AWS Secrets Manager with the following attributes:
# * account
# * user
# * password
# * host
# * port
# For example:
# SecretsManagerARN: arn:aws:secretsmanager:us-east-2:05060362XXXX:secret:panther-snowflake-secret-x1CT28
SecretsManagerARN: arn:aws:secretsmanager:eu-central-1:18532453XXXX:secret:panther-snowflake-secret-Uk9bBw

Add arn:aws:iam::34318291XXXX:user/k7m2-s-v2st0722 to the Snowflake configuration:

# Snowflake (https://www.snowflake.com/) Integration
Snowflake:
  # List of Snowflake cluster IAM ARNs which will ingest the output of Panther log processing.
  # To configure Snowflake, after deployment generate a file with the ops tool `snowpipe`.
  # This file should be run in your snowflake cluster to configure Snowpipe and declare the Panther tables.
  # For example:
  #   DestinationClusterARNs:
  #     - arn:aws:iam::34318291XXXX:user/k8m1-s-v2st0721 # test snowflake cluster
  #     - arn:aws:iam::34318291XXXX:user/h1h4-s-a2st0111 # production snowflake cluster
  DestinationClusterARNs:
    - arn:aws:iam::34318291XXXX:user/k7m2-s-v2st0722

If you are deploying using a pre-packaged deployment, you need to deploy the master template first. After deploying the master template, configure the master stack parameters as below and redeploy:

  • update: SnowflakeAPISecretARN parameter as the ARN of the secret created above

  • update: SnowflakeAPISecretKmsResource parameter as the ARN of the secret created above, but with the service changed from secretsmanager to kms and the resource set to * (replace the part after the last : with *)

  • update: SnowflakeDestinationClusterARNs parameter as the value of <SNOWFLAKE_IAM_USER> from above.

  • update: LogSubscriptionPrincipals parameter as a comma delimited list of:

    <SNOWFLAKE_IAM_USER>,arn:aws:iam::<panther account id>:role/panther-snowflake-logprocessing-role-<panther region>

Next, run mage deploy if you are deploying from source or deploy via the pre-packaged deployment using CloudFormation.

When the deployment is done, run the snowpipe ops tool. If you deployed from source, do:

snowpipe-darwin-amd64 -interactive=false -snowpipe-file snowpipe.sql

If you deployed using the master template do:

snowpipe-darwin-amd64 -interactive=false -master-stack=<name of master stack> -snowpipe-file snowpipe.sql

The compiled Panther ops tools can be downloaded from the GitHub release as artifacts or built from source by running mage build:tools. The compiled tools follow the naming convention <tool>-<os>-<cpu>; pick the tool compiled for your <os> and <cpu>.

In the Snowflake SQL shell, use the Load Script option to load snowpipe.sql.


Select the All Queries checkbox, then click Run.


Validation of Snowpipe Processing

Once snowpipe.sql has been successfully executed, you should have four databases:

  • panther_logs

  • panther_rule_matches

  • panther_views

  • panther_stored_procedures

These are the same database names used in AWS Athena, and queries should behave similarly.
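
A quick way to confirm the databases exist from the SQL shell (assuming the default panther naming above):

show databases like 'panther%';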

Assuming you have data being regularly processed, there should be data in the tables within a few minutes.

You can quickly test if the data ingestion is working by running simple queries, for example:

SELECT count(1) AS c FROM panther_logs.public.aws_cloudtrail;
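
If the count stays at zero, you can check that the pipes created by snowpipe.sql exist and are running. The pipe names are generated by the snowpipe tool, so substitute <pipe_name> with a name from the output of the first command:

show pipes in database panther_logs;
select system$pipe_status('panther_logs.public.<pipe_name>');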

Update Permissions and Test Panther User Interface with Snowflake

We need to configure the permissions for the Panther AWS secret. Go to the console and select the secret you created above. On the overview screen, click the Edit Permissions button. Copy the policy JSON below, substituting <snowflake api lambda role> with the role ARN shown at the top of the generated ./out/snowflake/snowpipe.sql file from above, and <secret ARN> with the ARN of the secret just created.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "<snowflake api lambda role>" },
      "Action": "secretsmanager:GetSecretValue",
      "Resource": "<secret ARN>"
    }
  ]
}

Then click the Save button.

The configuration can be tested from the Data Explorer. Run some sample queries over a table that you know has data (check via the Snowflake console).
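
For example, a minimal sketch using the same CloudTrail table as the validation step above (this assumes aws_cloudtrail has data; every Panther table carries a p_event_time column):

SELECT * FROM panther_logs.public.aws_cloudtrail ORDER BY p_event_time DESC LIMIT 10;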

After onboarding new data sources to Panther, you will need to re-run the snowpipe utility to generate an updated SQL script declaring the new tables.

To rotate secrets, create a NEW read-only user and edit the secret, replacing the old user and password with the new user and password. Wait one hour before deleting/disabling the old user in Snowflake.
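
A sketch of the rotation in SQL, using a hypothetical replacement user named panther_readonly2 (the warehouse grant from Step 1 carries over through the shared role):

create user panther_readonly2 password='<new_password>';
grant role panther_readonly_role to user panther_readonly2;
alter user panther_readonly2 set default_role = panther_readonly_role;
alter user panther_readonly2 set default_warehouse = '<your_warehouse>';
-- Update the AWS secret with the new user and password, then after one hour:
drop user panther_readonly;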