Logging strategies for security incident response
<p>Effective security incident response depends on adequate logging, as described in the AWS Security Incident Response Guide. If you have the proper logs and the ability to query them, you can respond more rapidly and effectively to security events. If a security event occurs, you can use various log sources to validate what occurred and understand the scope. Then, you can use the results of your analysis to take remediation actions. To learn more about logging best practices, see Configure service and application logging and Analyze logs, findings, and metrics centrally.</p>
<p>In this blog post, we will show you how to achieve an effective logging strategy for security incident response. We will share logging options across the typical cloud application stack, log analysis options, and sample queries. AWS offers managed services, such as <a href="https://aws.amazon.com/guardduty/" target="_blank" rel="noopener">Amazon GuardDuty</a> for threat detection and <a href="https://aws.amazon.com/detective/" target="_blank" rel="noopener">Amazon Detective</a> for incident analysis. If you want to collect additional logs or perform custom analysis, consider the options described in this blog post.</p>
<h2>Selection of logs</h2>
<p>To select the appropriate logs for security incident response, you should start with the common cloud application stack, which consists of the components and layers of your application deployed on AWS. For each component, we will describe the logging sources that you have. For each log source, we will describe why you should log it for security incident response, how to enable the logs, and what your log storage options are.</p>
<p>To select the logs for security incident response, first consider the following questions:</p>
<ul>
<li>What are your compliance and regulatory requirements for logging?<br><blockquote>
<p><strong>Note:</strong> Make sure that you comply with the log retention requirements of compliance standards relevant to your organization, as well as your organization’s incident response strategy.</p>
</blockquote> </li>
<li>What AWS services do you commonly use?</li>
<li>What AWS services have access to or contain sensitive data?</li>
<li>What threats are most relevant to you?<br><blockquote>
<p><strong>Note:</strong> Threat modeling your cloud architectures can help you answer this question. For more information, see <a href="https://aws.amazon.com/blogs/security/how-to-approach-threat-modeling/" target="_blank" rel="noopener">How to approach threat modeling</a>.</p>
</blockquote> </li>
</ul>
<p>Considering these questions can help you develop requirements for logging that will guide your selection of the following log sources.</p>
<h2>AWS account logs</h2>
<p>An AWS account is the first, fundamental component of an application deployed on AWS. The account is a container for your AWS resources. You create and manage your AWS resources in this account, and the account provides administrative capabilities for access and billing.</p>
<h3>AWS CloudTrail</h3>
<p>Within an account, each action performed is an API call. From a console sign-in to the deployment of each resource in an <a href="https://aws.amazon.com/cloudformation/" target="_blank" rel="noopener">AWS CloudFormation</a> stack, events are generated to provide transparency on what has occurred in the account. With <a href="https://aws.amazon.com/cloudtrail/" target="_blank" rel="noopener">AWS CloudTrail</a>, you can log, continuously monitor, and retain account activity related to actions across <a href="https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-aws-service-specific-topics.html" target="_blank" rel="noopener">supported AWS services</a>. CloudTrail provides the event history of your account activity, including actions taken through the <a href="https://aws.amazon.com/console/" target="_blank" rel="noopener">AWS Management Console</a>, <a href="https://aws.amazon.com/developer/tools/" target="_blank" rel="noopener">AWS SDKs</a>, command line tools, and other AWS services. CloudTrail logs API calls as three types of events:</p>
<ul>
<li><a href="https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-concepts.html#cloudtrail-concepts-management-events" target="_blank" rel="noopener"><strong>Management events</strong> </a>(also known as <em>control plane operations</em>) show management operations that are performed on resources in your account. This includes actions like creating an <a href="https://aws.amazon.com/s3/" target="_blank" rel="noopener">Amazon Simple Storage Service (Amazon S3)</a> bucket and setting up logging.</li>
<li><a href="https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-concepts.html#cloudtrail-concepts-data-events" target="_blank" rel="noopener"><strong>Data events</strong> </a>(also known as <em>data plane operations</em>) show the resource operations performed on or within resources in your account. These operations are often high-volume activities, such as Amazon S3 object-level API activity (for example, <span>GetObject</span>, <span>DeleteObject</span>, and <span>PutObject</span> API operations) and <a href="https://aws.amazon.com/lambda/" target="_blank" rel="noopener">AWS Lambda</a> function invocation activity.</li>
<li><a href="https://docs.aws.amazon.com/awscloudtrail/latest/userguide/view-insights-events.html" target="_blank" rel="noopener"><strong>Insights events</strong></a> capture unusual API call rate or error rate activity in your account. You must enable these events on a trail in order to capture them, and they are logged to a different folder prefix in the destination S3 bucket for your trail. Insights events provide you with information such as the type of event, the incident time period, the associated API, the error code, and statistics to help you understand and respond effectively to unusual activity.</li>
</ul>
<p>For security investigations, CloudTrail provides context on the creation, modification, and deletion of AWS resources. Therefore, CloudTrail is one of your most important log sources for security incident response in an AWS environment. You have three primary ways to set up CloudTrail:</p>
<ul>
<li><a href="https://docs.aws.amazon.com/awscloudtrail/latest/userguide/view-cloudtrail-events.html" target="_blank" rel="noopener"><strong>CloudTrail Event history</strong></a> — CloudTrail is enabled by default with 90-day retention of management events that you can retrieve through the CloudTrail Event history facility using the console, <a href="https://aws.amazon.com/cli/" target="_blank" rel="noopener">AWS Command line Interface (AWS CLI)</a>, or AWS SDK. You don’t need to take any action to get started using the Event history feature.</li>
<li><a href="https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-create-and-update-a-trail.html" target="_blank" rel="noopener"><strong>CloudTrail trail</strong></a> — For longer retention and visibility of data events, you need to create a CloudTrail trail and associate it with an S3 bucket and optionally with an <a href="https://aws.amazon.com/cloudwatch/" target="_blank" rel="noopener">Amazon CloudWatch</a> log group. If you use <a href="https://aws.amazon.com/organizations/" target="_blank" rel="noopener">AWS Organizations</a>, you can create an organization trail that will log events for each account in the organization. By default, trails are multi-Region, so you don’t need to enable CloudTrail logs in each AWS Region.</li>
<li><a href="https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-lake.html" target="_blank" rel="noopener"><strong>AWS CloudTrail Lake</strong></a> — You can create a CloudTrail lake, which retains CloudTrail logs for up to seven years and provides a SQL-based querying facility. You don’t need to have a trail configured in your account to use CloudTrail Lake.</li>
<li><a href="https://docs.aws.amazon.com/security-lake/latest/userguide/internal-sources.html" target="_blank" rel="noopener"><strong>Amazon Security Lake</strong></a> — You can use Security Lake to ingest CloudTrail events, which include management and data events. You can further analyze these events with <a href="https://aws.amazon.com/quicksight/" target="_blank" rel="noopener">Amazon QuickSight</a> or another other third-party security information and event management (SIEM) tool.</li>
</ul>
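<p>As a brief illustration of the SQL-based querying that CloudTrail Lake provides, a minimal query to list recent console sign-in events takes the following shape. The event data store ID shown here is a placeholder that you would replace with your own:</p>
<div class="hide-language">
<pre><code class="lang-text">-- Placeholder event data store ID; CloudTrail Lake queries reference the event data store ID in the FROM clause
SELECT eventTime, eventName, userIdentity.arn, sourceIPAddress
FROM 00000000-aaaa-bbbb-cccc-000000000000
WHERE eventTime > '2023-01-01 00:00:00'
AND eventName = 'ConsoleLogin'
ORDER BY eventTime DESC</code></pre>
</div>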
<h3>AWS Config</h3>
<p>Creating and modifying resources is an integral part of your account use. Tracking resource configuration changes made by calling the AWS API helps you review changes throughout the resource lifecycle. <a href="https://aws.amazon.com/config/" target="_blank" rel="noopener">AWS Config</a> provides a detailed view of the configuration of AWS resources in your account, examines the resource configurations periodically, and tracks configuration changes that were not initiated by the API. This includes how the resources are related to one another and how they were configured in the past so that you can see how configurations and relationships change over time.</p>
<p>You should enable AWS Config in each Region where you have resources deployed, and you should configure an S3 bucket to receive configuration history and configuration snapshot files, which contain details on the resources that AWS Config records. You can then review configuration compliance and analyze activities performed before, during, and after an event using the configuration history in S3. You should centralize AWS Config resource tracking across multiple accounts in the same organization by <a href="https://aws.amazon.com/blogs/mt/org-aggregator-delegated-admin/" target="_blank" rel="noopener">setting up an aggregator</a>. You can use <a href="https://docs.aws.amazon.com/controltower/latest/userguide/config.html" target="_blank" rel="noopener">AWS Control Tower to automate the setup</a>.</p>
<p>During a security investigation, you might want to understand how a resource configuration has changed over time. For example, you might want to investigate the changes to an S3 bucket policy before and after a security event that involves an S3 bucket. AWS Config provides a configuration history for resources that can help you track activities performed during a security event.</p>
<h2>Operating system and application logs</h2>
<p>To record interactions with applications, you must capture operating system (OS) and application logs, especially custom logs generated by the application development framework. OS and local application logs are relevant for security events that involve an <a href="https://aws.amazon.com/ec2/" target="_blank" rel="noopener">Amazon Elastic Compute Cloud (Amazon EC2)</a> instance. These instances could be standalone, in an auto scaling group behind a load balancer, or compute workloads for <a href="https://aws.amazon.com/ecs/" target="_blank" rel="noopener">Amazon Elastic Container Service (Amazon ECS)</a> or an <a href="https://aws.amazon.com/eks/" target="_blank" rel="noopener">Amazon Elastic Kubernetes Service (Amazon EKS)</a> cluster. OS logs track privileged use, processes, login events, access to directory services, and file system activity on a server. To analyze a potential compromise to an EC2 instance, you will want to review the security event logs for Windows OS and the system logs for Linux-based OS.</p>
<p>With the <a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Install-CloudWatch-Agent.html" target="_blank" rel="noopener">unified CloudWatch agent</a>, you can collect metrics and logs from EC2 instances and on-premises servers. The CloudWatch agent aggregates log data into <a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html" target="_blank" rel="noopener">CloudWatch logs</a>, which can then be exported to Amazon S3 for long-term retention and analyzed with a SIEM tool of your choice or <a href="https://aws.amazon.com/athena/" target="_blank" rel="noopener">Amazon Athena</a>, as shown in Figure 1.</p>
<div id="attachment_28985" class="wp-caption aligncenter">
<img aria-describedby="caption-attachment-28985" src="https://www.infracom.com.sg/wp-content/uploads/2023/04/img1.jpg" alt="Figure 1: Aggregate OS and application logs using CloudWatch Logs" width="760" class="size-full wp-image-28985">
<p id="caption-attachment-28985" class="wp-caption-text">Figure 1: Aggregate OS and application logs using CloudWatch Logs</p>
</div>
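<p>For example, after you export an OS log group from CloudWatch Logs to S3, you can run a simple Athena query against it. The following sketch assumes a hypothetical Athena table named <span>os_auth_logs</span> with a single string column named <span>entry</span>, defined over the S3 prefix that receives your exported Linux authentication logs:</p>
<div class="hide-language">
<pre><code class="lang-text">-- Hypothetical table and column names; adjust to your own export location and schema
SELECT entry
FROM os_auth_logs
WHERE entry LIKE '%Failed password%'
LIMIT 100;</code></pre>
</div>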
<h2>Database logs</h2>
<p>With SQL databases, you can log transactions to help track modifications to the databases, such as additions or deletions. After an engine or system failure, you will need transaction logs to restore a database to a consistent state. Transaction logs are designed to be secure, and they require additional processing to access valuable information. It’s important that you understand data interactions during a security investigation, especially if your databases hold personally identifiable information (PII), financial and payments information, or other information subject to regulatory controls. </p>
<p>When you use <a href="https://aws.amazon.com/rds/" target="_blank" rel="noopener">Amazon Relational Database Service (Amazon RDS)</a>, you can <a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_LogAccess.Procedural.UploadtoCloudWatch.html" target="_blank" rel="noopener">publish database logs to Amazon CloudWatch Logs</a>. For <a href="https://aws.amazon.com/nosql/" target="_blank" rel="noopener">NoSQL</a> databases, tracking atomic interactions is useful. You can find logs for managed NoSQL databases like <a href="https://aws.amazon.com/dynamodb/" target="_blank" rel="noopener">Amazon DynamoDB</a> in CloudTrail. DynamoDB integrates with CloudTrail, providing a record of actions taken by a user, role, or service. These events are classified as data events in CloudTrail.</p>
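<p>For example, if your trail captures DynamoDB data events and you query CloudTrail logs with Athena (using the same <span>cloudtrail</span> table as the sample queries later in this post), a query like the following lists recent item-level DynamoDB activity. The time window is illustrative:</p>
<div class="hide-language">
<pre><code class="lang-text">SELECT eventtime, useridentity.arn, eventname, requestparameters
FROM cloudtrail
WHERE eventsource = 'dynamodb.amazonaws.com'
AND eventname IN ('GetItem', 'BatchGetItem', 'Query', 'Scan', 'DeleteItem')
AND eventtime >= '2023-01-01T00:00:00Z'
LIMIT 100;</code></pre>
</div>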
<h2>Network logs</h2>
<p>The goal of logging network activity is to gain insight into the communications that traverse your network. You might need this data for a variety of reasons, such as network troubleshooting or for use in a forensic investigation of suspected malware activity within your network.</p>
<p>In the AWS Cloud, you can log network activity by creating a proxy that logs network traffic or by using <a href="https://docs.aws.amazon.com/vpc/latest/mirroring/what-is-traffic-mirroring.html" target="_blank" rel="noopener">Traffic Mirroring</a> to send a copy of network traffic to a logging server. You can adopt cloud-native approaches to capture this type of data using <a href="https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/query-logs.html" target="_blank" rel="noopener">Amazon Route 53 DNS query logs</a> and <a href="https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs.html" target="_blank" rel="noopener">Amazon VPC Flow Logs</a>.</p>
<p>There are also a variety of third-party networking solutions available like <a href="https://partners.amazonaws.com/partners/001E0000013FeQXIA0/Palo%20Alto%20Networks" target="_blank" rel="noopener">Palo Alto Networks</a> and <a href="https://www.fortinet.com/products/public-cloud-security/aws?did=pa_card&trk=pa_card" target="_blank" rel="noopener">Fortinet</a>, so you can continue to use the network logging mechanisms that you might have used in an on-premises environment.</p>
<h3>Route 53 DNS query logs</h3>
<p>You can configure <a href="https://aws.amazon.com/route53/" target="_blank" rel="noopener">Amazon Route 53</a> to log Domain Name System (DNS) queries. These logs are categorized into two groups:</p>
<ul>
<li>Public DNS query logging</li>
<li>Resolver query logging</li>
</ul>
<p>Logging public DNS queries against domains that you have hosted in Route 53 provides <a href="https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/query-logs.html" target="_blank" rel="noopener">query information</a>, such as the domain or subdomain requested, date and time stamp of the request, DNS record type, Route 53 edge location that responded, and response code.</p>
<p>The <a href="https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resolver.html" target="_blank" rel="noopener">Amazon Route 53 Resolver</a> comes with <a href="https://aws.amazon.com/vpc/" target="_blank" rel="noopener">Amazon Virtual Private Cloud (Amazon VPC)</a> by default. Capturing Resolver query logs provides the same information as public queries, as well as additional information such as the Instance ID of the resource that the query originated from. You can also capture Resolver query logs against <a href="https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resolver-query-logs.html" target="_blank" rel="noopener">different types of queries</a>.</p>
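<p>For example, if you deliver Resolver query logs to S3 and define an Athena table over them, a query like the following lists queries for a domain of interest. The table name <span>r53_resolver_logs</span> is a placeholder, and the columns assume the Resolver query log fields described in the documentation linked above:</p>
<div class="hide-language">
<pre><code class="lang-text">-- Placeholder table name; columns follow the Resolver query log fields
SELECT query_timestamp, srcaddr, srcids.instance AS instance_id, query_name, query_type, rcode
FROM r53_resolver_logs
WHERE query_name LIKE '%.example.org.'
LIMIT 100;</code></pre>
</div>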
<h3>VPC Flow Logs</h3>
<p>You can configure VPC Flow Logs for a VPC in your account to capture traffic that enters and moves around your VPC network, without the addition of instances or products. From these logs, you can review information, such as source and destination IP, ports, timestamps, protocol, account ID, and whether the traffic was accepted or rejected. For a complete list of the fields available for flow log records, see <a href="https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs.html#flow-logs-fields" target="_blank" rel="noopener">Available fields</a>. You can create a flow log for a VPC, a subnet, or a network interface. If you create a flow log for a subnet or VPC, IP traffic going to and from each network interface in that subnet or VPC will be logged. For more details on VPC Flow Logs, see <a href="https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs.html" target="_blank" rel="noopener">Logging IP traffic using VPC Flow Logs</a>.</p>
<p>You can forward flow logs to <a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html" target="_blank" rel="noopener">Amazon CloudWatch Logs</a> to create CloudWatch alarms based on metric filters. You can also forward flow logs to an S3 bucket for long-term retention and further analysis. Figure 2 demonstrates these configurations.</p>
<div id="attachment_28986" class="wp-caption aligncenter">
<img aria-describedby="caption-attachment-28986" src="https://www.infracom.com.sg/wp-content/uploads/2023/04/img2.jpg" alt="Figure 2: Sending VPC Flow logs to CloudWatch Logs and S3" width="760" class="size-full wp-image-28986">
<p id="caption-attachment-28986" class="wp-caption-text">Figure 2: Sending VPC Flow logs to CloudWatch Logs and S3</p>
</div>
<h2>Access logs</h2>
<p>To identify access patterns for accessible endpoints, especially public endpoints, you should use access logs. Access logs capture detailed information about the requests sent to an endpoint, such as the time the request was received, the client’s IP address, latencies, request paths, and server responses. With services built in layers behind a load balancer, unless you track the <span>X-Forwarded-For</span> request header, the requestor’s context is lost. Access logs help bridge that gap during investigations and analysis.</p>
<h3>Amazon S3 server access logs</h3>
<p>Access logs are critical for tracking object-level access when you use <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/ServerLogs.html" target="_blank" rel="noopener">S3 buckets</a> to store confidential or sensitive data. You can also turn on CloudTrail to capture S3 data events. You can store access logs in S3 buckets for long-term storage for compliance purposes and to run analyses during and after an event.</p>
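<p>For example, if you define an Athena table over your S3 server access logs by following the column layout in the Amazon S3 documentation, a query like the following shows who retrieved a specific object. The table name <span>s3_access_logs</span> and the object key are placeholders:</p>
<div class="hide-language">
<pre><code class="lang-text">-- Placeholder table name and object key; columns follow the S3 server access log format
SELECT requestdatetime, remoteip, requester, operation, key, httpstatus
FROM s3_access_logs
WHERE operation = 'REST.GET.OBJECT'
AND key = 'sensitive/example-object.csv'
LIMIT 100;</code></pre>
</div>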
<h3>Load balancing logs</h3>
<p><a href="https://aws.amazon.com/elasticloadbalancing/" target="_blank" rel="noopener">Elastic Load Balancing</a> provides access logs that capture detailed information about requests sent to load balancers. Each log contains information such as the time the request was received, the client’s IP address, latencies, request paths, and server responses. You can use this log to analyze traffic patterns and to troubleshoot issues.</p>
<p>Access logging is an optional feature of Elastic Load Balancing that is turned off by default. To enable access logs for load balancers, see <a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-access-logs.html" target="_blank" rel="noopener">Access logs for your Application Load Balancer</a>.</p>
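<p>For example, once access logs are enabled and you define an Athena table over them by following the Application Load Balancer log format, a query like the following surfaces the clients that send the most requests. The table name <span>alb_logs</span> is a placeholder:</p>
<div class="hide-language">
<pre><code class="lang-text">-- Placeholder table name; columns follow the ALB access log format
SELECT client_ip, count(*) AS request_count
FROM alb_logs
GROUP BY client_ip
ORDER BY request_count DESC
LIMIT 10;</code></pre>
</div>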
<p>If you implement your own reverse proxy for load balancing needs, make sure that you capture the reverse proxy access logs. You can use the <a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Install-CloudWatch-Agent.html" target="_blank" rel="noopener">unified CloudWatch agent</a> to forward the logs to CloudWatch. As with OS logs, you can export CloudWatch logs to an S3 bucket for long-term retention and analysis.</p>
<p>If you use an <a href="https://aws.amazon.com/cloudfront/" target="_blank" rel="noopener">Amazon CloudFront</a> distribution as the public endpoint for end users with load balancers as the custom origin, then load balancing access logs will represent the CloudFront distribution as the requestor, rather than the actual end user. If this information doesn’t add value to your incident handling process, then you can use <a href="https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/AccessLogs.html" target="_blank" rel="noopener">CloudFront access logs</a> as the log source that provides end user request details.</p>
<h3>CloudFront access logs</h3>
<p>You should enable standard logs, also known as access logs, when using CloudFront. Specify an S3 bucket where you want CloudFront to save the files.</p>
<p>CloudFront access logs are delivered on a best-effort basis. For information about requests made to a distribution in real time, use <a href="https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/real-time-logs.html" target="_blank" rel="noopener">real-time logs</a> that are delivered within seconds of receiving the requests. You should use real-time logs to monitor, analyze, and take action based on content delivery performance. For more details on the fields available from these logs, see the <a href="https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/AccessLogs.html#LogFileFormat" target="_blank" rel="noopener">CloudFront standard log file format</a>.</p>
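<p>For example, if you define an Athena table over your CloudFront standard logs by following the log file format linked above, a query like the following identifies client IPs that receive the most HTTP 403 responses. The table name <span>cloudfront_logs</span> is a placeholder:</p>
<div class="hide-language">
<pre><code class="lang-text">-- Placeholder table name; columns follow the CloudFront standard log file format
SELECT request_ip, uri, count(*) AS denied_requests
FROM cloudfront_logs
WHERE status = 403
GROUP BY request_ip, uri
ORDER BY denied_requests DESC
LIMIT 10;</code></pre>
</div>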
<h3>AWS WAF logs</h3>
<p>When associated with a supported resource like a CloudFront distribution, <a href="https://aws.amazon.com/api-gateway/" target="_blank" rel="noopener">Amazon API Gateway</a> REST API, <a href="https://aws.amazon.com/elasticloadbalancing/application-load-balancer/" target="_blank" rel="noopener">Application Load Balancer</a>, <a href="https://aws.amazon.com/appsync/" target="_blank" rel="noopener">AWS AppSync</a> GraphQL API, <a href="https://aws.amazon.com/cognito/" target="_blank" rel="noopener">Amazon Cognito</a> user pool, or <a href="https://aws.amazon.com/apprunner/" target="_blank" rel="noopener">AWS App Runner</a>, <a href="https://aws.amazon.com/waf/" target="_blank" rel="noopener">AWS WAF</a> can help you monitor HTTP and HTTPS requests that are forwarded to the resource. You should configure web access control lists (ACLs) to gain fine-grained control over the requests, and enable logging for such ACLs to get detailed information about traffic that is analyzed by AWS WAF. Log information includes time of the request being received by AWS WAF from the AWS resource, details about the request, and the AWS WAF rules that the request matched. You can use this log information to monitor access patterns of public endpoints and configure rules to inspect requests in detail. For more information about AWS WAF logging, see <a href="https://docs.aws.amazon.com/waf/latest/developerguide/logging.html" target="_blank" rel="noopener">Logging web ACL traffic</a>.</p>
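<p>For example, if you deliver AWS WAF logs to S3 and define an Athena table over them by following the AWS WAF log format, a query like the following lists recently blocked requests. The table name <span>waf_logs</span> is a placeholder:</p>
<div class="hide-language">
<pre><code class="lang-text">-- Placeholder table name; columns follow the AWS WAF log format
SELECT from_unixtime("timestamp"/1000) AS event_time, httprequest.clientip, action, terminatingruleid
FROM waf_logs
WHERE action = 'BLOCK'
LIMIT 100;</code></pre>
</div>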
<h2>Serverless logs</h2>
<p>Serverless computing has become increasingly popular in the cloud-computing space. It provides on-demand compute power in relatively short bursts, which means that cloud-based instances don’t need to be provisioned and kept idle when there are no tasks to complete. Although more and more compute tasks are being moved to serverless solutions, the need to log has not changed, but how the logs are generated has. In a serverless environment, security investigations benefit not only from logs that show the interactions and changes made by the deployed code, but also from logs that document changes to the deployed code itself and to the access permissions of the Lambda execution role that grants it privileged access.</p>
<h3>AWS Lambda</h3>
<p>The logging of <a href="https://docs.aws.amazon.com/lambda/?id=docs_gateway" target="_blank" rel="noopener">Lambda</a> functions involves two components: how the function itself is operating, and what is happening inside the function (what your code is actually doing).</p>
<p>The logging of a Lambda function itself occurs through data events captured by CloudTrail. As noted earlier in this post, you must configure data events on a trail created in CloudTrail. During configuration, you will need to specify the function from which logs will be captured by your trail, and the destination S3 bucket where they will be stored. These logs contain details on the invocation of the function and help identify the IAM principals that called the Invoke API for Lambda.</p>
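<p>For example, using the same <span>cloudtrail</span> Athena table as the sample queries later in this post, a query like the following helps identify who invoked a function during a given window. The time frame is illustrative:</p>
<div class="hide-language">
<pre><code class="lang-text">SELECT eventtime, useridentity.arn, sourceipaddress, requestparameters
FROM cloudtrail
WHERE eventsource = 'lambda.amazonaws.com'
AND eventname = 'Invoke'
AND eventtime >= '2023-01-01T00:00:00Z'
LIMIT 100;</code></pre>
</div>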
<p>AWS Lambda automatically monitors Lambda functions on your behalf and sends logs to CloudWatch. Your Lambda function comes with a CloudWatch Logs log group and a log stream for each instance of your function. The Lambda runtime environment sends details about each invocation to the log stream, and relays logs and other output from your function’s code. For more details on how to monitor Lambda functions, see <a href="https://docs.aws.amazon.com/lambda/latest/dg/monitoring-cloudwatchlogs.html" target="_blank" rel="noopener">Accessing Amazon CloudWatch logs for AWS Lambda</a>.</p>
<h2>Log analysis</h2>
<p>For incident response, you need to be able to analyze and query your logs to validate what occurred and to understand the scope.</p>
<p>To begin, you can aggregate logs from various sources in S3 buckets for long-term storage, and you can integrate that data with query tools for further investigation. Logs can be exported and either parsed through directly, or ingested by another tool to help with the analysis. The following are some options that you can use to query these logs:</p>
<ul>
<li><a href="https://aws.amazon.com/athena/" target="_blank" rel="noopener">Amazon Athena</a> — You can directly query CloudTrail events stored in S3 with Athena using SQL commands, specifying the <code>LOCATION</code> of the log files. You would generally use this approach if you have advanced queries to run, and you don’t have a SIEM. To set up Athena to query logs, you can use this <a href="https://github.com/awslabs/aws-security-analytics-bootstrap" target="_blank" rel="noopener">open-source solution</a> from AWS.</li>
<li><a href="https://aws.amazon.com/opensearch-service/" target="_blank" rel="noopener">Amazon OpenSearch Service</a> — OpenSearch is a distributed search and log analytics suite. Because it’s open source, it can ingest logs from more than just AWS log sources. To set this up, you can use this <a href="https://github.com/aws-samples/siem-on-amazon-opensearch-service" target="_blank" rel="noopener">open-source SIEM solution</a> from AWS.</li>
<li><a href="https://docs.aws.amazon.com/awscloudtrail/latest/userguide/how-cloudtrail-works.html" target="_blank" rel="noopener">CloudTrail</a> Event History — Either from the console, or programmatically, you can query CloudTrail management events from the last 90-day period. This is ideal for when you have simple queries to make within the last 90 days, and you don’t need stored logs or more complex queries.</li>
<li><a href="https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-lake.html" target="_blank" rel="noopener">AWS CloudTrail Lake</a> — Either from the console, or programmatically, you can query stored events in your configured CloudTrail Lake from the time of its configuration, up until the maximum storage duration of 2,557 days (7 years) from the time that you make your query. This approach allows for SQL-based queries, and it is ideal for when you need to make more complex queries against events, but don’t require the additional features of a SIEM solution.</li>
<li><a href="https://aws.amazon.com/premiumsupport/knowledge-center/cloudtrail-search-api-calls/" target="_blank" rel="noopener">Parse through raw JSON using CLI</a> — This is achieved programmatically and parsed through terminal commands. It’s more a legacy method of parsing through logs. You might choose to use this approach for analysis if another service or solution isn’t feasible (for example, if you can’t use the service due to your corporate security policy).</li>
<li>Third-party SIEM — A third-party SIEM might be ideal if you already have a SIEM solution on AWS or elsewhere, and you don’t need a duplicated solution elsewhere. Typically, SIEM solutions will import logs from an S3 bucket and process and index events for analysis. To learn more about SIEM options, see the <a href="https://aws.amazon.com/marketplace/solutions/control-tower/siem" target="_blank" rel="noopener">SIEM solutions in the AWS Marketplace</a>, or the <a href="https://aws.amazon.com/security/partner-solutions/?blog-posts-cards.sort-by=item.additionalFields.createdDate&blog-posts-cards.sort-order=desc&partner-solutions-cards.sort-by=item.additionalFields.partnerNameLower&partner-solutions-cards.sort-order=asc&awsf.partner-solutions-filter-partner-type=*all&awsf.Filter%20Name%3A%20partner-solutions-filter-partner-categories=use-case%23threat-detection-mitigation&awsf.partner-solutions-filter-partner-location=*all&partner-case-studies-cards.sort-by=item.additionalFields.sortDate&partner-case-studies-cards.sort-order=desc&events-master-partner-webinars.sort-by=item.additionalFields.startDateTime&events-master-partner-webinars.sort-order=asc&awsm.page-partner-solutions-cards=2" target="_blank" rel="noopener">AWS Security Competency Partners</a> for a partner local to you with threat detection and incident response (TDIR) expertise.</li>
</ul>
<h2>Sample queries</h2>
<p>In this section, we provide samples of SQL queries. Both Athena and CloudTrail Lake accept SQL queries, but the following samples have been tested for use in Athena only. This is because some samples are for VPC Flow Logs, which you can’t query from CloudTrail Lake. To query CloudTrail logs in Athena, you must first create a table definition that points to the location of your logs stored in S3. You can do this from the CloudTrail Events console by using a hyperlinked suggestion, or from the Athena console directly. Alternatively, for Athena, you can use the <a href="https://github.com/awslabs/aws-security-analytics-bootstrap" target="_blank" rel="noopener">AWS Security Analytics Bootstrap</a>.</p>
<p>For each of these queries, you might need to modify some of the fields, such as the time frame that you are investigating, the IAM entity involved, and the account and Region in scope. For example, you might want to modify the time frame based on the current time and when you believe the security event began. This often involves expanding the time frame after running additional queries and learning more about the scope and timeline.</p>
<p>By using partitions for tables, you can restrict the amount of data scanned by each Athena query, helping to improve performance and reduce cost. For example, you can partition your CloudTrail Athena table <a href="https://docs.aws.amazon.com/athena/latest/ug/cloudtrail-logs.html#create-cloudtrail-table-org-wide-trail" target="_blank" rel="noopener">manually</a> or by using <a href="https://docs.aws.amazon.com/athena/latest/ug/cloudtrail-logs.html#create-cloudtrail-table-partition-projection" target="_blank" rel="noopener">partition projection</a>. You can include the partition column (for example, the timestamp) in your queries to limit the amount of data scanned.</p>
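<p>For example, if your CloudTrail table uses the partition projection setup linked above, with a date-formatted partition column named <span>timestamp</span>, a query can restrict the scan to one month as follows (the partition column name and format are assumptions based on that example):</p>
<div class="hide-language">
<pre><code class="lang-text">-- Assumes a partition column named "timestamp" in yyyy/MM/dd format, per the partition projection example
SELECT eventtime, eventname, useridentity.arn
FROM cloudtrail
WHERE "timestamp" >= '2023/01/01'
AND "timestamp" < '2023/02/01'
AND eventname = 'ConsoleLogin'
LIMIT 100;</code></pre>
</div>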
<h3>Unauthorized attempts</h3>
<p>When a security event occurs, you might want to review API calls that were attempted but failed due to the IAM principal not having access to perform the action on that resource. To discover this activity, run the following query (be sure to modify the time window first):</p>
<div class="hide-language">
<pre><code class="lang-text">SELECT *
FROM cloudtrail
WHERE errorcode IN (‘Client.UnauthorizedOperation’,’Client.InvalidPermission.NotFound’,’Client.OperationNotPermitted’,’AccessDenied’)
AND useridentity.arn LIKE ‘%iam%’
AND eventtime >= ‘2023-01-01T00:00:00Z’
AND eventtime < ‘2023-03-01T00:00:00Z’
ORDER BY eventtime desc
<p>This sample query can help you identify whether certain IAM principals have made a significant number of unauthorized API calls, which can indicate that an IAM principal is compromised.</p>
<h3>Rejected TCP connections</h3>
<p>During a security event, the unauthorized user that is interacting with the resources in your account is probably trying to establish persistence through the network layer. To get a list of rejected TCP connections and extract from it the day that these events occurred, run the following query:</p>
<div class="hide-language">
<pre><code class="lang-text">SELECT day_of_week(date) AS
day,date,interface_id,srcaddr,action,protocol
FROM vpc_flow_logs
WHERE action = ‘REJECT’ AND protocol = 6
LIMIT 100;
<h3>Connections over older TLS versions</h3>
<p>You might want to see how many calls to AWS APIs were made using older versions of the TLS protocol, as part of a forensic follow-up or a discovery job after a risk analysis. You can get this data by querying CloudTrail logs.</p>
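<p>The following sample is illustrative and assumes that your CloudTrail table definition includes the <span>tlsDetails</span> field, which CloudTrail populates for supported services:</p>
<div class="hide-language">
<pre><code class="lang-text">-- Assumes the cloudtrail table includes the tlsDetails struct
SELECT useridentity.arn, eventsource, eventname, tlsdetails.tlsversion, count(*) AS call_count
FROM cloudtrail
WHERE tlsdetails.tlsversion IN ('TLSv1', 'TLSv1.1')
AND eventtime >= '2023-01-01T00:00:00Z'
GROUP BY useridentity.arn, eventsource, eventname, tlsdetails.tlsversion
ORDER BY call_count DESC</code></pre>
</div>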
<h3>Filter connections from an IP</h3>
<p>With an IP address that you’d like to investigate, as a part of your forensic analysis, you might want to see the connections made to resources in a VPC. You can obtain this information by querying VPC Flow Logs. As with the server access logs, if you’re using Athena, you will first need to create a new table.</p>
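<p>The following sample uses the same <span>vpc_flow_logs</span> table as the earlier query; the source address is a placeholder that you replace with the IP address under investigation:</p>
<div class="hide-language">
<pre><code class="lang-text">-- Placeholder IP address; replace with the address under investigation
SELECT date, interface_id, srcaddr, dstaddr, dstport, action, protocol
FROM vpc_flow_logs
WHERE srcaddr = '198.51.100.77'
LIMIT 100;</code></pre>
</div>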
<h3>Investigate user actions</h3>
<p>If you have identified a user who has been compromised, or that you suspect has been compromised, you might want to know the API calls that they made over a specific time period. Understanding the activity of a user can help you understand the scope of impact during an incident, as well as the reach of user permissions when you design your access management strategy.</p>
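<p>The following sample uses the <span>cloudtrail</span> table from the earlier queries; the principal ARN and time window are placeholders that you replace with the user and period under investigation:</p>
<div class="hide-language">
<pre><code class="lang-text">-- Placeholder ARN and time window; replace with the principal and period under investigation
SELECT eventtime, eventsource, eventname, sourceipaddress, errorcode
FROM cloudtrail
WHERE useridentity.arn = 'arn:aws:iam::111122223333:user/example-user'
AND eventtime >= '2023-01-01T00:00:00Z'
AND eventtime < '2023-01-08T00:00:00Z'
ORDER BY eventtime</code></pre>
</div>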
<h2>Conclusion</h2>
<p>It is essential that you capture logs from the various layers of your application architecture, so that you can respond effectively to a security event wherever it occurs in the application stack. If a security event occurs, logs can help provide a clear picture of what happened and the scope of the affected resources. This post helps you build a logging strategy for security incident response by understanding what logs you want to analyze, where you want to store those logs, and how you will analyze them.</p>
<h2>Further resources</h2>
<p>If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the <a href="https://repost.aws/topics/TAEEfW2o7QS4SOLeZqACq9jA/security-identity-compliance" rel="noopener" target="_blank">AWS Security, Identity, & Compliance re:Post</a> or <a href="https://console.aws.amazon.com/support/home" rel="noopener" target="_blank">contact AWS Support</a>.</p>
<p><strong>Want more AWS Security news? Follow us on <a title="Twitter" href="https://twitter.com/AWSsecurityinfo" target="_blank" rel="noopener noreferrer">Twitter</a>.</strong></p>