Panther is a collection of serverless applications deployed within your AWS account. The frontend is a React application which runs in a Docker container (via ECS), and the backend is a collection of compute (Lambda), storage (DynamoDB / S3), and other supporting services.

The sections below cover how Panther works under the hood, how to build and deploy from source, and how to extend Panther to meet your individual needs.

Architecture Diagram

This diagram provides an overview of the core components of Panther, and how they are connected.

High level architecture diagram

For a more detailed architecture diagram, see the bottom of this page.


To deploy from source, install Docker and make sure the daemon is running in the background.

For the remaining dependencies, you can either use our development image or install development dependencies locally.

Development Image

This is the easier option, but will also lead to much slower builds.

Simply export your AWS credentials as environment variables, then run ./ to launch the development image. From there, run mage setup and you're good to go.
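For example, a typical session looks something like this (us-east-1 is just an example region, and the session token is only needed for temporary credentials):

export AWS_REGION=us-east-1
export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...
export AWS_SESSION_TOKEN=...   # only when using temporary credentials

# Run ./ to enter the development image, then inside the container:
mage setup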

Local Dependencies

To install dependencies locally (recommended for regular contributors), you'll need Go, Node.js, and Python 3.

For example, on macOS with Homebrew:

brew install go node@14 python3
export PATH=$HOME/go/bin:$PATH
go get
go get

Then install the remaining development libraries:

mage setup

You'll need to run mage setup every time the dev libraries are updated. If you run into issues, try mage clean setup to reset your repo and re-install all dependencies.
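If mage setup fails, it can help to first confirm the base toolchain is actually on your PATH; for example:

go version          # Go compiler installed above
node --version      # should report a v14.x release for node@14
python3 --version
docker info         # also confirms the Docker daemon is running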


Panther uses mage, a Go tool similar to make, to manage the development lifecycle.

Run mage from the repo root to see the list of available commands:

build:lambda Compile Go Lambda function source
build:tools Compile devtools and opstools
clean Remove dev libraries and build/test artifacts
deploy Deploy Panther to your AWS account
doc Preview auto-generated documentation in out/doc
fmt Format source files
gen Autogenerate API-related source files and CloudWatch dashboards
master:deploy Deploy single master template nesting all other stacks
master:publish Publish a new Panther release (Panther team only)
setup Install build and development dependencies
teardown Destroy Panther infrastructure
test:cfn Lint CloudFormation and Terraform templates
test:ci Run all required checks for a pull request
test:go Test and lint Go source
test:integration Run integration tests against a live deployment
test:python Test and lint Python source
test:web Test and lint web source

You can easily chain mage commands together; for example: mage clean setup test:ci deploy
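mage also accepts a few useful flags of its own; for example, you can print the help text for a single target or run a target with verbose output:

mage -h deploy      # show the description and usage of the deploy target
mage -v test:go     # run the Go tests with verbose output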

Repo Layout

Since the majority of Panther is written in Go, the repo follows the standard Go project layout:




api/ - Input/output models for communicating with Panther's backend APIs
build/ - Dockerfiles for CI and deployment
cmd/ - Go dev and ops tools
deployments/ - CloudFormation templates for deploying Panther itself or integrating the accounts you want to scan
docs/ - Documentation, license headers, README, images, code of conduct, etc.
internal/ - Source code for all of Panther's Lambda functions
pkg/ - Standalone Go libraries that could be directly imported by other projects
tools/ - Magefile source and other build infrastructure
web/ - Source for the Panther web application


Testing

Run our test suite: mage test:ci
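During day-to-day development you usually don't need the full CI suite; for example, you might format, regenerate, and run only the Go and web checks relevant to your change:

mage fmt gen test:go test:web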

Run integration tests against a live deployment: mage test:integration

  • To run tests for only one package: PKG=./internal/compliance/compliance-api/main mage test:integration

Warning: integration tests will erase all Panther data stores


AWS Credentials

Configure your AWS credentials and deployment region:

export AWS_REGION=us-east-1 # Any supported region
export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...

Remember to follow best security practices when handling access keys:

  • Avoid storing them in plaintext files

  • Use IAM roles with temporary session credentials

  • Rotate access keys every 90 days

  • Enforce MFA for key access

Tools like aws-vault can help with all of the above; check out our blog post to learn more!
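For example, aws-vault can inject short-lived, MFA-backed credentials into a single command, such as the deploy described in the next section (the panther-dev profile name is just a placeholder for whatever profile you've configured):

aws-vault exec panther-dev -- mage deploy   # prompts for MFA if the profile requires it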

Mage Deploy

Now you can run mage deploy:

  • If you're using the development image, be sure to export your AWS credentials in the environment before running ./

  • If your credentials time out, you can safely redeploy to pick up where you left off.

  • If you use aws-vault, you must be authenticated with MFA. Otherwise, IAM role creation will fail with InvalidClientTokenId

  • You can also update a single stack in an existing deployment: STACK=appsync mage deploy

  • Or, you can update the source code for a single Lambda function: LAMBDA=log-processor mage deploy

This will deploy the main CloudFormation stacks independently and is optimized for development. If instead you want to deploy the single master template: mage master:deploy

Panther relies on a number of custom CloudFormation resources. Like any resource, these will not be updated unless the input parameters have changed. You can force an update of most custom resources by overriding their version: CUSTOM_RESOURCE_VERSION=v1.5.0 mage deploy
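As a sanity check after updating a single Lambda function, you can tail its CloudWatch logs to confirm the new code is running. A minimal example, assuming AWS CLI v2 and the log-processor function from above (the exact log group name is inferred from the /aws/lambda/panther- prefix used elsewhere on this page):

aws logs tail /aws/lambda/panther-log-processor --follow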

From an EC2 Instance

You can also deploy from an EC2 instance with Docker and git installed, running in the same region you're deploying Panther to. Instead of exporting your AWS credentials as environment variables, attach the deployment IAM role to your EC2 instance profile. We recommend an m5.large or larger instance type, but even one as small as a t2.small should be sufficient.
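For example, if the instance is already running, you can attach an instance profile containing the deployment role with the AWS CLI (both the instance ID and the profile name below are placeholders):

aws ec2 associate-iam-instance-profile \
    --instance-id i-0123456789abcdef0 \
    --iam-instance-profile Name=PantherDeploymentProfile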


Teardown

Run mage teardown to remove all Panther infrastructure, including S3 buckets that would normally be retained if you deleted only the CloudFormation stacks. If you have a single top-level Panther stack, run STACK=your-stack-name mage teardown instead.

In summary, teardown will:

  1. Delete all Panther stacks

  2. Delete all S3 buckets tagged with Application:Panther and Stack:panther-bootstrap

    • If there are too many objects to delete directly, an expiration policy is set instead

  3. Delete all CloudWatch log groups prefixed with /aws/lambda/panther-

    • Be careful running teardown if you have your own "panther" service that might be writing to log groups with that prefix
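If you want to preview which buckets teardown will touch, you can list the resources carrying those tags via the Resource Groups Tagging API; the filters below mirror the tags from step 2:

aws resourcegroupstaggingapi get-resources \
    --resource-type-filters s3 \
    --tag-filters Key=Application,Values=Panther Key=Stack,Values=panther-bootstrap \
    --query 'ResourceTagMappingList[].ResourceARN'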

Additional Diagrams

The diagrams below can be used to understand Panther's architecture at a deeper level and provide insight into data flows.

Detailed Architecture Diagram

This diagram provides additional detail to the high-level diagram above:

Architecture diagram

While more detailed than the overview above, this diagram still simplifies some implementation details for clarity. For example, most Lambda functions do not invoke each other directly; instead, they communicate via SQS queues or DynamoDB streams.

Data Flow Diagram

This diagram shows where and how your data is stored and processed:

Data flow diagram

The arrows above indicate the direction in which data is transferred, whereas the arrows in the previous diagrams indicate the direction in which communication is initiated.