Data Explorer
Data Explorer gives Panther deployments access to a panel where you can perform SQL queries on your Panther data. It adds a clean UI powered by a robust backend, with persistence and autocomplete features that enhance your querying experience.
With Panther Data Explorer, you are able to:
    Browse collected log data, rule matches, and search standard fields across all data
    Write SQL queries on normalized data with autocompletion every step of the way
    Save, tag and load your queries
    Create scheduled queries to run through the rule engine
    View query results in a friendly way
    Share results with your team through a shareable link
    Select entire rows as JSON to use in the rule engine as unit tests
    Download results in a CSV
As with all of our enterprise features, access to the Data Explorer can be limited through our Role-Based Access Control system.

Macros

All the tables in our supported backend databases (Athena and Snowflake) are partitioned by event time to allow for better performance when querying with a time filter. However, the partition technologies differ across the backends, and using them efficiently can make for unwieldy SQL code if written by hand. To make things easier, we have created a number of macros that are expanded into full expressions when the query is sent to the database, so you don't have to write these complicated expressions yourself.
Note: The macro expansions will only work from the Panther UI; they will not work if you are using the native database web UI.

Time range filter: p_occurs_between

p_occurs_between(startTime, endTime [, tableAlias])
startTime - correctly formatted time expression, indicating start of search window
endTime - correctly formatted time expression, indicating end of search window
tableAlias - optional parameter, allows passing through a table alias to the filter
Note: Please ensure that your time expression can be parsed by the database backend your team is using. Some expressions that work in Snowflake (e.g. 2021-01-21T11:15:54.346Z) will not be accepted as valid timestamps by Athena. A safe default time format looks like 2021-01-02 15:04:05.000 and is assumed to be in the UTC time zone.
The macro p_occurs_between() takes a start time, an end time and optionally a table alias (in case you want to use the macro across one or multiple tables in a join) and filters the result set to those events in the time range, using the correct partition (minimizing I/O and speeding up the query).
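For example, to scan a fixed window you might pass explicit timestamp literals in the safe format noted above (a sketch; the window bounds are illustrative):
select p_db_name, count(*) as freq from panther_views.public.all_databases
where p_occurs_between('2021-01-02 15:04:05.000', '2021-01-03 15:04:05.000')
group by p_db_name
limit 1000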
The following Snowflake command contains a macro:
select p_db_name, count(*) as freq from panther_views.public.all_databases
where p_occurs_between(current_date - 1, current_timestamp)
group by p_db_name
limit 1000
The macro will be automatically expanded before the query is sent to the database. The form the expansion takes is database-specific. In Snowflake, this expansion is pretty straightforward:
select p_db_name, count(*) as freq from panther_views.public.all_databases
where p_event_time between convert_timezone('UTC',current_date - 1)::timestamp_ntz
and convert_timezone('UTC',current_timestamp)::timestamp_ntz
group by p_db_name
limit 1000
Keep in mind that different database backends allow different date formats and operations. Athena does not allow simple arithmetic operations on dates, so care must be taken to use an Athena-friendly time expression:
select p_db_name, count(*) as freq from panther_views.public.all_databases
where p_occurs_between(current_date - interval '1' day, current_timestamp)
group by p_db_name
limit 1000
Because of the structure of allowed indexes on partitions in Athena, the expansion looks different:
select p_db_name, count(*) as freq from panther_views.all_databases
where p_event_time between cast (current_date - interval '1' day as timestamp) and cast (current_timestamp as timestamp)
and partition_time between to_unixtime(date_trunc('HOUR', (cast (current_date - interval '1' day as timestamp))))
and to_unixtime(cast (current_timestamp as timestamp))
group by p_db_name
limit 1000
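The optional tableAlias parameter lets you apply the time filter to a specific table in a join, just as with p_occurs_since below. A minimal Snowflake sketch, using an illustrative self-join (the requestid join key is assumed purely for demonstration) to show the alias being passed through:
select count(*) as freq
from panther_logs.public.aws_cloudtrail ct1
join panther_logs.public.aws_cloudtrail ct2
on ct1.requestid = ct2.requestid -- illustrative join key
where p_occurs_between(current_date - 1, current_timestamp, ct1)
and p_occurs_between(current_date - 1, current_timestamp, ct2)
limit 1000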

Time offset from present: p_occurs_since

p_occurs_since(offsetFromPresent [, tableAlias])
offsetFromPresent - positive integer representing the number of seconds relative to the present to scan back to, or time expression comprising a positive integer and a time part (seconds, minutes, hours, days, weeks)
tableAlias - optional parameter, allows passing through a table alias to the filter
The macro p_occurs_since() takes an offset (a number of seconds by default, or a time expression as described below) and optionally a table alias (in case you want to use the macro across one or multiple tables in a join), and filters the result set down to those events that occurred within the specified offset of the current time, using the correct partition or cluster key (minimizing I/O and speeding up the query).
For convenience, we have added additional time parts that can be used:
    s, sec, second, seconds — macro adds specified seconds to offset
    m, min, minute, minutes — macro adds specified minutes to offset
    h, hr, hrs, hour, hours — macro adds specified hours to offset
    d, day, days — macro adds specified days to offset
    w, wk, week, weeks — macro adds specified number of weeks to offset
    if no suffix is detected, the macro will proceed as before and default to seconds

Examples:

p_occurs_since('6 d')
p_occurs_since('2 weeks')
p_occurs_since(900) -- assumes seconds
p_occurs_since('96 hrs')
Note: If this is used in a scheduled query, then rather than using the current time as the reference, the scheduled run time will be used. For example, if a query is scheduled to run at the start of each hour, then the p_occurs_since('1 hour') macro will expand using a time range of 1 hour starting at the start of each hour (regardless of when the query is actually executed).
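For example, a query scheduled to run at the start of each hour could use the macro to aggregate exactly the previous hour's events. A sketch using the aws_cloudtrail table from the example below:
select eventname, count(*) as freq
from panther_logs.public.aws_cloudtrail
where p_occurs_since('1 hour')
group by eventname
order by freq desc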
In the following example of a macro with a table alias parameter, we look at CloudTrail logs to identify S3 buckets created and then deleted within one hour of their creation, a potentially suspicious behavior. To get this information we do a self-join on the aws_cloudtrail table in panther_logs, and we use the macro to limit this search to the past 24 hours on each of the two elements of the self-join (aliased ct1 and ct2 below):
select
ct1.p_event_time createTime, ct2.p_event_time deleteTime,
timediff('s', createTime, deleteTime) timeExtant,
ct1.requestparameters:"bucketName"::varchar createdBucket,
ct1.useridentity:"arn"::varchar createArn, deleteArn,
ct1.useragent createUserAgent, deleteUserAgent
from panther_logs.public.aws_cloudtrail ct1
join (
select p_event_time, requestparameters:"bucketName"::varchar deletedBucket, errorcode,
eventname deleteEvent, useridentity:"arn"::varchar deleteArn, useragent deleteUserAgent from panther_logs.public.aws_cloudtrail) ct2
on (ct1.requestparameters:"bucketName"::varchar = ct2.deletedBucket
and ct2.p_event_time > ct1.p_event_time
and timediff('s', ct1.p_event_time, ct2.p_event_time) < 3600)
where ct2.deleteEvent = 'DeleteBucket'
and ct1.eventName = 'CreateBucket'
and ct1.errorCode is null and ct2.errorcode is null
and p_occurs_since('1 day', ct2) -- apply to ct2
and p_occurs_since('24 hours', ct1) -- apply to ct1
order by createdBucket, createTime;
There are two separate calls to p_occurs_since, each applied to a different table, as indicated by the table alias used as the second parameter. This is expanded into the following Snowflake query:
select
ct1.p_event_time createTime, ct2.p_event_time deleteTime,
timediff('s', createTime, deleteTime) timeExtant,
ct1.requestparameters:"bucketName"::varchar createdBucket,
ct1.useridentity:"arn"::varchar createArn, deleteArn,
ct1.useragent createUserAgent, deleteUserAgent
from panther_logs.public.aws_cloudtrail ct1
join (
select p_event_time, requestparameters:"bucketName"::varchar deletedBucket, errorcode,
eventname deleteEvent, useridentity:"arn"::varchar deleteArn, useragent deleteUserAgent from panther_logs.public.aws_cloudtrail) ct2
on (ct1.requestparameters:"bucketName"::varchar = ct2.deletedBucket
and ct2.p_event_time > ct1.p_event_time
and timediff('s', ct1.p_event_time, ct2.p_event_time) < 3600)
where ct2.deleteEvent = 'DeleteBucket'
and ct1.eventName = 'CreateBucket'
and ct1.errorCode is null and ct2.errorcode is null
and ct2.p_event_time >= current_timestamp - interval '86400 second'
and ct1.p_event_time >= current_timestamp - interval '86400 second'
order by createdBucket, createTime;