Gain full control over your Cisco ASA (Adaptive Security Appliance) data streams with Splunk’s Data Management Pipeline Builders
Whether you’re filtering out low-priority firewall logs or reducing noisy events, Splunk’s Edge Processor (customer-hosted) and Ingest Processor (Splunk-hosted SaaS) let you filter, transform, and optimize Cisco ASA logs before routing them to the Splunk platform or to Amazon S3 for low-cost storage.
Get started quickly with out-of-the-box templates: with just a few clicks you can apply a pre-built pipeline and preview the results before applying any changes, with no custom code required.
How to use the Cisco ASA template
The Cisco ASA Pipeline Template is a pre-built SPL2-based logic that helps you clean, filter, and route your Cisco ASA logs, before they even reach your Splunk index.
Note: This pipeline template can be applied in both Edge Processor and Ingest Processor. Unless you already have Edge Processor configured, we recommend using Ingest Processor to avoid additional configuration steps.
Here’s how you can get started:
Step-by-Step Instructions
1. Log in to your Splunk Cloud Platform and navigate to Settings → Add Data → Data Management Experience.
2. From your Data Management homepage, select Pipelines → Templates, then search for Cisco ASA log reduction.
3. Click Create Pipeline, select the Ingest or Edge Processor option, and apply the Cisco ASA template. This gives you a ready-to-use pipeline with logic that extracts useful fields, drops noisy message IDs, enriches events with explanations, masks usernames, and routes logs to the appropriate destinations.
4. Use live data snapshots or sample logs to preview what the pipeline will do. You’ll see exactly which logs are kept and which are dropped.
5. Give your pipeline a name (like cisco_asa_filter_splunk) and apply it. From that point forward, incoming Cisco ASA logs will be filtered and stored exactly as configured.
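Before applying the template, it can help to see what its field extractions actually pull out of a raw event. The following is a quick Python sketch that applies the same regular expressions the pipeline uses to an illustrative ASA syslog line (the log line and its values are hypothetical, chosen only to exercise the patterns):

```python
import re

# Hypothetical Cisco ASA syslog line for illustration only
raw = ("%ASA-6-109025: Authorization denied (acl=acl_out) for user 'jdoe' "
       "from 10.1.1.9/1234 to 192.168.1.5/443 on interface outside")

# Same patterns as the pipeline's rex commands
msg_id = re.search(r"(%ASA|%FTD)-\d+-(?P<message_id>\d+)", raw)
user = re.search(r"^[^'\n]*'(?P<username>[^']+)", raw)

print(msg_id.group("message_id"))  # 109025
print(user.group("username"))      # jdoe
```

The message ID drives all of the later filtering and routing decisions, and the username extraction (the first single-quoted token in the event) feeds the PII-masking step described below.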
Cisco ASA Log Reduction Pipeline
You can use this SPL2 code to customize your pipeline template as you see fit.
import 'cisco_msg_id.csv' from /envs.splunk.'eps-shw-522513dc5758f0'.lookups
import route from /splunk/ingest/commands
import logs_to_metrics from /splunk/ingest/commands
function extract_useful_fields($source) {
return | from $source
/* Extract the message portion starting with %ASA or %FTD */
| rex field=_raw /(?P<_raw>(%ASA|%FTD).*)/
/* Extract the message ID number */
| rex field=_raw /(%ASA|%FTD)-\d+-(?P<message_id>\d+)/
/* Extract the username (first single-quoted token) */
| rex field=_raw /^[^'\n]*'(?P<username>[^']+)/
}
function drop_security_noise($source) {
return | from $source
| where message_id != "302013"
| where message_id != "302015"
| where message_id != "302016"
| where message_id != "110003"
| where message_id != "110002"
}
function mask_usernames($source) {
return | from $source
| eval _raw=replace(_raw, username, "[NAME_REDACTED]")
}
function enrich_with_explanation($source) {
return | from $source
| lookup 'cisco_msg_id.csv' message_id AS message_id OUTPUT explanation AS explanation
}
$pipeline = | from $source
// extract the useful fields
| extract_useful_fields
// Drop noisy message IDs (302013, 302015, 302016, 110002, 110003)
| drop_security_noise
// enrich log events with explanations based on message ID
| enrich_with_explanation
// convert logs to metrics and send to o11y cloud
| thru [
| logs_to_metrics name="cisco_asa" metrictype="counter" value=1 time=_time dimensions={"message_id": message_id}
| into $metrics_destination
]
// send authentication logs to Splunk index
| route message_id == "109025", [
// Mask usernames to protect PII
| mask_usernames
| fields -username
| eval index = "cisco_auth_logs"
| into $splunk_destination
]
// Archive the rest of the logs to AWS S3
| into $aws_s3_destination;
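To reason about the end-to-end behavior before deploying, the pipeline's drop/mask/route decisions can be sketched in plain Python. This is only a local approximation for experimentation, not the SPL2 runtime; the `process` helper and the destination labels are hypothetical names:

```python
import re

# Message IDs the pipeline treats as security noise
NOISY_IDS = {"302013", "302015", "302016", "110002", "110003"}

def process(raw: str):
    """Mimic the pipeline: drop noise, mask usernames, pick a destination."""
    m = re.search(r"(%ASA|%FTD)-\d+-(?P<message_id>\d+)", raw)
    if not m:
        return None  # not an ASA/FTD event
    message_id = m.group("message_id")
    if message_id in NOISY_IDS:
        return None  # drop_security_noise
    u = re.search(r"^[^'\n]*'(?P<username>[^']+)", raw)
    if u:
        # mask_usernames: redact PII before indexing
        raw = raw.replace(u.group("username"), "[NAME_REDACTED]")
    # Authentication events go to the Splunk index; everything else to S3
    dest = "splunk:cisco_auth_logs" if message_id == "109025" else "s3_archive"
    return dest, raw
```

For example, a 302013 connection event is dropped entirely, while a 109025 authorization event is masked and routed to the Splunk index; all other ASA events fall through to the S3 archive.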