Log drains
Log drains are currently in beta, and minor details may change. We'd love to hear any feedback or
requests at hello@airplane.dev.
Log drains allow you to automatically stream all audit logs for your team to
one or more destinations in your organization's observability stack.
To configure log drains, click on "Log drains" on the left side of the "Team settings" page. From
there, you can set up one or more destination types, described in more detail below.
Audit log structure
The following example shows how audit logs are structured for export to log drains:
{
  id: "aud20230509a4haxz",
  createdAt: "2023-05-09T23:14:33Z",
  actorID: "usr20230509hd1hfg",
  actorEmail: "a.user@airplane.dev",
  targetID: "run202305093jasjk",
  context: {
    ipAddress: "1.2.3.4",
    userAgent: "Mozilla/5.0 (X11; Linux x86_64) ..."
  },
  event: {
    type: "run.finished",
    payload: {
      runFinished: {
        envID: "env20220314jasd",
        envSlug: "prod",
        runDurationMs: 5241,
        runSource: "form",
        runStatus: "Succeeded",
        taskID: "tsk20221104sj4ja",
        taskName: "My task",
        taskSlug: "my_task"
      },
    },
  }
};
Note that the structure of the payload field will vary based on the event type.
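As a rough illustration, a consumer of these logs might branch on the event type before reading the payload. In the sketch below, the runFinished fields come from the example above; the run.created type name is an assumption, and its runCreated payload fields are taken from the facet paths listed later on this page.

// Sketch: branch on the event type before reading the event payload.
// runFinished fields are from the example above; "run.created" is an
// assumed type name whose runCreated fields come from the facet paths
// listed later on this page.
function describeAuditLog(auditLog) {
  const { type, payload } = auditLog.event;
  switch (type) {
    case "run.finished": {
      const run = payload.runFinished;
      return `${run.taskSlug} finished with status ${run.runStatus} after ${run.runDurationMs}ms`;
    }
    case "run.created": {
      const run = payload.runCreated;
      return `${run.taskSlug} was started via ${run.runSource}`;
    }
    default:
      return `unhandled event type: ${type}`;
  }
}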
Destination types
Datadog
If the Datadog log drain is enabled, each audit log will be sent to the Datadog log collection API using the configured API key. These logs will be indexed in Datadog with the source name airplane-audit-logs and the service name airplane.
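As a quick sanity check, you can confirm these logs are arriving by running a query along the following lines in the Datadog Logs explorer (the event type filter reuses the attribute path described in the dashboard section below):

  source:airplane-audit-logs service:airplane @data.event.type:run.finished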
OpenTelemetry
If an OpenTelemetry log drain is enabled, each audit log will be sent to an
OpenTelemetry collector in your team's infrastructure
using the configured URL. The collector can then filter and transform the logs before forwarding them on to other destinations in your observability stack, such as Datadog, AWS CloudWatch, or Splunk.
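As a rough sketch, and assuming the drain delivers logs to the collector over OTLP/HTTP, a minimal collector configuration might look like the following; the listen address and the debug exporter are placeholders to swap for the exporters your stack actually uses.

receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318   # assumed listen address behind the configured URL

processors:
  batch: {}

exporters:
  debug: {}                      # placeholder; swap in your Datadog/CloudWatch/Splunk exporters

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug]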
Webhook
If a webhook log drain is enabled, each audit log will be sent as an HTTP POST request to the
configured webhook URL. The body will consist of a single audit log in JSON format.
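For illustration, a receiving endpoint can be quite small; the sketch below uses Node's built-in http module, and the port and logged fields are arbitrary choices.

// Minimal sketch of a webhook receiver for audit logs using Node's
// built-in http module; the port and logged fields are arbitrary.
const http = require("http");

const server = http.createServer((req, res) => {
  let body = "";
  req.on("data", (chunk) => (body += chunk));
  req.on("end", () => {
    // Each request body is a single audit log in JSON format.
    const auditLog = JSON.parse(body);
    console.log(`${auditLog.event.type} (actor: ${auditLog.actorEmail})`);

    // Respond with a 200 so the delivery isn't treated as an error and
    // retried (see "Errors and retries" below).
    res.statusCode = 200;
    res.end();
  });
});

server.listen(8080);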
Errors and retries
Any non-200 HTTP response from a downstream destination will be considered an error and the
associated request will be retried later. After 5 consecutive delivery errors, the corresponding
event will be dropped from the log drain export pipeline and not sent again. However, it will still
be visible in the Activity page and exportable via CSV.
Troubleshooting
If logs aren't reaching your team's configured drain(s), please contact
support@airplane.dev for assistance.
Creating Datadog dashboards from log drain data
If you're sending log drain data to Datadog, you can use the log events to create dashboards and
alerts around your organization's Airplane activity.

To create an "Airplane runs" dashboard like the one above:
1. Ensure that log drain data is correctly arriving in your Datadog account by searching for source:airplane-audit-logs in the Logs search page.
2. Download our Datadog dashboard JSON template and save it in an accessible place.
3. Create facets for the following attribute paths by clicking the "+ Add" button in the left panel of the Logs search page:
   - @data.event.payload.runCreated.runSource
   - @data.event.payload.runCreated.taskSlug
   - @data.event.payload.runFinished.runStatus
   - @data.event.payload.runFinished.taskSlug
   - @data.event.type
4. Create a facet measure for the following attribute (by selecting "Measure" instead of "Facet" in the facet creation panel): @data.event.payload.runFinished.runDurationMs
5. Create a new, empty dashboard.
6. Click "Configure", then "Import dashboard JSON...", and select the file downloaded in step (2).
After following these steps, you should see your dashboard populated with your log data within a few
minutes.
Once the initial dashboard is in place, you can extend it by adding facets and plots for other types
of log drain events (e.g. task updates).
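For example, a new widget for task update activity could be driven by a log query along these lines; note that task.updated is a guessed type name here, so check the values that actually appear under @data.event.type in your logs:

  source:airplane-audit-logs @data.event.type:task.updated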