The `Service` prefix groups log types based on the service that produced them (e.g. `Syslog`). All log types for a service are grouped under a Go module in the parsers module, and each log type is defined in its own `.go` file inside that module. For example, all `AWS.*` log types exist in the `awslogs` module, and `AWS.CloudTrail` is defined in its own file there. Each module exports a `LogTypes() logtypes.Group` function to declare all the log types it contains, and Panther knows to include all exported log types in modules under the parsers module.
Suppose `Foo.Event` is a log event produced by `FooService` that describes the result of a user request. Our parser should process `Foo.Event` log lines and produce a Panther log event that will be stored in our storage backend, processed by the rules engine, and queried in the security data lake.
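As an illustration, a hypothetical `Foo.Event` log line might look like the following (the field names are invented for this sketch, not taken from a real service):

```json
{"time": "2021-04-15T16:32:05Z", "user": "alice", "action": "login", "remote_ip": "192.0.2.11", "status": "success"}
```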
Panther includes a `logtesting` package with helpers for testing log parsers. This package allows running tests declared in YAML files using the `logtesting.RunTestsFromYAML` function. These tests verify that the parser produces the expected log event(s) for a given log entry, which ensures we haven't missed any fields or indicator values along the way.
We declare the tests for `Foo.Event` in a YAML file. The expected result JSON includes all the panther fields, so we can verify that log processing is correct. For testing purposes, generated fields such as `p_row_id` are omitted from the expected result, since they would vary on each run of the test; the helper only verifies that these fields are non-empty and of a valid format in the parsed result. The tests are run from `foologs/foologs_test.go`, which reads the test cases from the YAML file and executes them.
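A test case pairs an input log line with the expected result. A minimal hypothetical example is sketched below; the field names follow the `Foo.Event` sketch above, and the exact keys of the test-file schema are an assumption, not a confirmed format:

```yaml
name: Foo.Event login success
logType: Foo.Event
input: '{"time": "2021-04-15T16:32:05Z", "user": "alice", "remote_ip": "192.0.2.11"}'
result: '{"time": "2021-04-15T16:32:05Z", "user": "alice", "remote_ip": "192.0.2.11", "p_log_type": "Foo.Event", "p_event_time": "2021-04-15T16:32:05Z", "p_any_ip_addresses": ["192.0.2.11"]}'
```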
We define `Foo.Event` as a Go struct by:

- Using value types from the `pantherlog` module. These types handle `null` values and missing JSON fields by omitting them in the output. Empty strings (`""`) and zero numeric values are never omitted, in order to preserve as much as possible of the original log event.
- Adding a `description` struct tag to document the contents of each field. These documentation strings are used to generate user documentation from the code.
- Making sure each `json` tag uses the exact field name that appears in the logs. Panther automatically adds `omitempty` to all fields.
- Using `panther:"SCANNER"` tags to define indicator fields. Note that a scanner can produce multiple indicator fields from a single value, or a different indicator field depending on the value.

Panther defines the following scanners:
- adds a `p_any_ip_addresses` indicator if the value is a valid IP address
- adds a `p_any_ip_addresses` indicator if the value is a valid IP address, otherwise adds a domain name indicator
- adds a `p_any_ip_addresses` indicator using the hostname part of the URL
- adds a `p_any_ip_addresses` indicator by splitting an `address:port` value
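The hostname-style scanner behavior can be sketched as follows. This is a simplified stand-in for Panther's actual scanners, assuming only the two indicator fields discussed above:

```go
package main

import (
	"fmt"
	"net"
)

// Indicators is a stand-in for the p_any_* indicator fields on a result.
type Indicators struct {
	IPAddresses []string // p_any_ip_addresses
	DomainNames []string // p_any_domain_names
}

// ScanHostname mimics a hostname-style scanner: a valid IP address is
// recorded as an IP indicator, any other non-empty value as a domain name.
func (ind *Indicators) ScanHostname(value string) {
	if value == "" {
		return
	}
	if net.ParseIP(value) != nil {
		ind.IPAddresses = append(ind.IPAddresses, value)
		return
	}
	ind.DomainNames = append(ind.DomainNames, value)
}

func main() {
	var ind Indicators
	ind.ScanHostname("192.0.2.11")
	ind.ScanHostname("example.com")
	fmt.Println(ind.IPAddresses, ind.DomainNames)
	// Prints: [192.0.2.11] [example.com]
}
```

This illustrates why a single struct tag can populate different indicator fields depending on the runtime value.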
For fields with a small numeric range, such as port numbers, use a sized type such as `pantherlog.Uint16`. This will limit the storage requirements for the columns. If you are unsure about the range limits, use a wider integer type. All `pantherlog` value types handle the `null` case by omitting the field when encoding to JSON, regardless of whether the value was `null` or missing in the log input.
To parse `Foo.Event` log entries, we need to provide a way for Panther to turn log input into a Panther log event. To achieve this, we provide a type implementing Panther's parser interface, which parses a single log entry and returns either the parsed events or a `nil` slice and an error. The parsed events are returned as `pantherlog.Result` values so the log processor can store them.
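The shape of such a parser can be sketched with stand-in types. The `Result` type and the interface below are simplified substitutes for Panther's `pantherlog.Result` and parser interface, shown only to illustrate the parse-or-error contract:

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// Result is a simplified stand-in for pantherlog.Result.
type Result struct {
	LogType   string
	EventTime time.Time
	Event     any
}

// LogParser is a simplified stand-in for Panther's parser interface.
type LogParser interface {
	ParseLog(log string) ([]*Result, error)
}

type fooEvent struct {
	Time time.Time `json:"time"`
	User string    `json:"user"`
}

type FooParser struct{}

// ParseLog parses one Foo.Event log line. On failure it returns a nil
// slice and an error; on success, one Result per parsed event.
func (FooParser) ParseLog(log string) ([]*Result, error) {
	var event fooEvent
	if err := json.Unmarshal([]byte(log), &event); err != nil {
		return nil, err
	}
	return []*Result{{LogType: "Foo.Event", EventTime: event.Time, Event: &event}}, nil
}

func main() {
	var p LogParser = FooParser{}
	results, err := p.ParseLog(`{"time":"2021-04-15T16:32:05Z","user":"alice"}`)
	fmt.Println(len(results), err)
	// Prints: 1 <nil>
}
```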
The `panther` struct tags are only processed when the `Result` is encoded to JSON. This is deliberate, in order to support both JSON and text-based log types. In the log processor pipeline this happens in the final stage, when the result is written to a buffer that will be uploaded to S3.

Attention: this means that `EventTime` will only be set on the `Result` when it is encoded to JSON. An event time set explicitly via `EventTimer` instances takes precedence over event timestamps defined with struct tags.
Now that we have our parser, we need to map the `Foo.Event` log type to it. Panther keeps track of supported log types using a registry of log types.
Each log type is described by a `logtypes.Config`, a struct including all the information required to build a `logtypes.Entry` for any log type. To get a `logtypes.Entry` from a config we use its `BuildEntry() (logtypes.Entry, error)` function. Since the configuration for `Foo.Event` is static, we use `logtypes.MustBuild`, which panics on invalid configuration so that any errors surface as soon as the package is initialized.
With all the `logtypes.Entry` values declared in a package, we group them together in a `logtypes.Group`, which provides a read-only view of our `logtypes.Entry` collection. Note that each group has a name to describe the purpose of the grouping; we choose `"Foo"` for our group. Finally, the package exports a `LogTypes() logtypes.Group` function so Panther knows to register and use these log types at runtime.
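The registration flow can be sketched end-to-end with simplified stand-ins for `logtypes.Config`, `logtypes.Entry`, and `logtypes.Group`; the real types live in Panther's `logtypes` module and carry more information than shown here:

```go
package main

import (
	"errors"
	"fmt"
)

// Config is a simplified stand-in for logtypes.Config.
type Config struct {
	Name        string
	Description string
}

// Entry is a simplified stand-in for logtypes.Entry.
type Entry struct{ Config }

// BuildEntry validates the config and produces an Entry.
func (c Config) BuildEntry() (Entry, error) {
	if c.Name == "" {
		return Entry{}, errors.New("log type name is required")
	}
	return Entry{c}, nil
}

// MustBuild panics on invalid configuration so mistakes surface as soon
// as the package is initialized.
func MustBuild(c Config) Entry {
	entry, err := c.BuildEntry()
	if err != nil {
		panic(err)
	}
	return entry
}

// Group is a simplified stand-in for logtypes.Group: a named, read-only
// collection of entries.
type Group struct {
	name    string
	entries []Entry
}

func NewGroup(name string, entries ...Entry) Group { return Group{name, entries} }
func (g Group) Name() string                       { return g.name }
func (g Group) Entries() []Entry                   { return g.entries }

// LogTypes is what a parser package would export for registration.
func LogTypes() Group {
	return NewGroup("Foo",
		MustBuild(Config{Name: "Foo.Event", Description: "Result of a user request"}),
	)
}

func main() {
	g := LogTypes()
	fmt.Println(g.Name(), len(g.Entries()))
	// Prints: Foo 1
}
```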
Panther also provides `logtypes.ConfigJSON`, which automates the definition of the parser. In practice the code required to define such a log type is much shorter, as seen in the TL;DR section below.
To test the parser end-to-end, you can either upload files to the `panther-bootstrap-auditlogs-<id>` bucket to drive log processing, or use the development tool at `./out/bin/devtools/<os>/<arch>/logprocessor` to read files from the local file system.