Forensic investigation environment strategies within the AWS Cloud

When a deviation from your secure baseline occurs, it's imperative to respond and resolve the issue quickly, and then to follow up with a forensic investigation and root cause analysis. Having a preconfigured infrastructure and a practiced plan for deploying it when there's a deviation from your baseline will help you extract and analyze the information needed to determine the impact, scope, and root cause of an incident and return to operations confidently.

Time is of the essence in understanding the what, how, who, where, and when of a security incident. You often hear of automated incident response, which includes auditable and repeatable processes to standardize the resolution of incidents and accelerate evidence artifact gathering.


Similarly, having a standard, pristine, pre-configured, and repeatable forensic clean-room environment that can be automatically deployed through a template allows your organization to minimize human interaction, keep the larger organization separate from contamination, hasten evidence gathering and root cause analysis, and protect the integrity of forensic data. The forensic analysis process assists in data preservation, acquisition, and analysis to identify the root cause of an incident. This process can also facilitate the presentation or transfer of evidence to outside legal entities or auditors. AWS CloudFormation templates, or other infrastructure as code (IaC) provisioning tools, help you to achieve these goals, providing your organization with consistent, well-structured, and auditable results that allow for a better overall security posture. Having these environments as a permanent part of your infrastructure allows them to be well tested and documented, and gives you opportunities to train your teams in their use.

This post provides strategies that you can use to prepare your organization to respond to secure baseline deviations. These strategies take the form of best practices around Amazon Web Services (AWS) account structure, AWS Organizations organizational units (OUs) and service control policies (SCPs), forensic Amazon Virtual Private Cloud (Amazon VPC) and network infrastructure, evidence artifacts to be collected, AWS services to be used, forensic analysis tool infrastructure, and user authorization and access to the above. The specific focus is to provide an environment where Amazon Elastic Compute Cloud (Amazon EC2) instances with forensic tooling can be used to examine evidence artifacts.

This post presumes that you already have an evidence artifact collection procedure, or that you are implementing one, and that the evidence can be transferred to the accounts described here. If you're looking for advice on how to automate artifact collection, see How to automate forensic disk collection for guidance.

Infrastructure overview

A well-architected multi-account AWS environment is based on the structure provided by Organizations. As companies grow and need to scale their infrastructure with multiple accounts, often in multiple AWS Regions, Organizations offers programmatic creation of new AWS accounts combined with central management and governance that helps them do so in a controlled and standardized manner. This programmatic, centralized approach should be used to create the forensic investigation environments described in the strategy in this blog post.

The example in this blog post uses a simplified structure with separate dedicated OUs and accounts for security and forensics, shown in Figure 1. Your organization's architecture may vary, but the strategy remains the same.

Note: There may be reasons for forensic analysis to be performed live within the compromised account itself, such as to avoid shutting down or accessing the compromised instance or resource; however, that approach isn't covered here.

Figure 1: AWS Organizations forensics OU example


The main components in Figure 1 are:

  • A security OU, which is used for hosting security-related services and access. The security OU and the associated AWS accounts should be owned and managed by your security organization.
  • A forensics OU, which should be a separate entity, although it may have some similarities and crossover responsibilities with the security OU. There are several reasons for having it within a separate account and OU. Some of the more important reasons are that the forensics team might be a different team than the security team (or a subset of it), certain investigations may be under legal hold with additional access restrictions, or a member of the security team could be the focus of an investigation.

When discussing Organizations, accounts, and the permissions necessary for various actions, you should first look at SCPs, a core functionality of Organizations. SCPs offer central control over the maximum available permissions for all accounts in your organization. In the example in this blog post, you can use SCPs to provide identical or similar permission policies to all the accounts under the forensics OU, which is used as a resource container. This policy overrides all other policies, and is a crucial mechanism to ensure that you can explicitly deny or allow any API calls desired. Some use cases of SCPs are to restrict the ability to disable AWS CloudTrail, restrict root user access, and ensure that all actions taken in the forensic investigation account are logged. This provides a centralized way to avoid changing individual policies for users, groups, or roles. Access to the forensic environment should follow a least-privilege model, with nobody initially capable of modifying or compromising the collected evidence. For an investigation environment, denying all actions except those you want to list as exceptions is the most straightforward approach. Start with the default of denying all, and work your way towards the minimal authorizations needed to perform the forensic processes established by your organization. AWS Config can be a valuable tool to track the changes made to the account and provide evidence of these changes.
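As a minimal sketch of the deny-based approach, the following builds a hypothetical SCP document for accounts under the forensics OU that denies attempts to disable CloudTrail logging; the statement and action list are illustrative, not a complete policy:

```python
import json

# Hypothetical SCP sketch for the forensics OU. A real policy would start
# from a deny-all baseline and allow only the vetted forensic actions.
forensics_scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ProtectCloudTrail",
            "Effect": "Deny",
            "Action": [
                "cloudtrail:StopLogging",
                "cloudtrail:DeleteTrail",
                "cloudtrail:UpdateTrail",
            ],
            "Resource": "*",
        }
    ],
}

print(json.dumps(forensics_scp, indent=2))
```

Because an SCP sets only the permission ceiling, the same document can be attached once at the OU level rather than maintained per account.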

Keep in mind that once the restrictive SCP is applied, even the root account or those with administrator access won't have permissions beyond those granted; therefore, frequent, proactive testing as your environment changes is a best practice. Also, be sure to validate which principals can remove the protective policy, if required, to transfer the account to another entity. Finally, create the environment before the restrictive permissions are applied, and then move the account under the forensics OU.

Having a separate AWS account dedicated to forensic investigations is best to keep your larger organization separate from the possible risk of contamination from the incident itself, to ensure the isolation and protection of the integrity of the artifacts being analyzed, and to keep the investigation confidential. Separate accounts also avoid situations where the threat actors might have used up all the resources immediately available to your compromised AWS account by hitting service quotas, thereby preventing you from instantiating an Amazon EC2 instance to perform investigations.

Having a forensic investigation account per Region is also a good practice, as it keeps the investigative capabilities close to the data being analyzed, reduces latency, and avoids issues of the data changing regulatory jurisdictions. For example, data residing in the EU might need to be examined by an investigative team in North America, but the data itself can't be moved because its North American architecture doesn't align with GDPR compliance. For global customers, forensics teams may be located in different locations worldwide and have different processes. It's better to have a forensic account in the Region where an incident arose. The account as a whole could also then be provided to local legal institutions or third-party auditors if required. That said, if your AWS infrastructure is contained within Regions in only a single country or jurisdiction, a single re-creatable account in a single Region, with evidence artifacts shared from and kept within their respective Regions, could be an easier architecture to manage over time.

An account created in an automated fashion using a CloudFormation template, or other IaC methods, allows you to minimize human interaction before use by recreating an entirely new and untouched forensic analysis instance for each separate investigation, ensuring its integrity. Individuals will only gain access as part of a security incident response plan, and even then, permissions to modify the environment should be minimal or none at all. The post-investigation environment will then be either preserved in a locked state or removed, and a fresh, blank one created in its place for the next investigation with no trace of the previous artifacts. Templating your environment also facilitates testing to ensure that your investigative strategy, permissions, and tooling will work as intended.
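As a sketch of what such a template might declare, the following generates a minimal CloudFormation template body for an isolated forensic VPC; the logical names and CIDR blocks are hypothetical, and the key point is the deliberate absence of any internet gateway resource:

```python
import json

# Hypothetical CloudFormation template sketch for the forensic environment.
# There is intentionally no AWS::EC2::InternetGateway resource, so the VPC
# stays fully isolated from the internet.
forensic_vpc_template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Isolated forensic analysis VPC (sketch)",
    "Resources": {
        "ForensicVPC": {
            "Type": "AWS::EC2::VPC",
            "Properties": {"CidrBlock": "10.0.0.0/24", "EnableDnsSupport": True},
        },
        "AnalysisSubnet": {
            "Type": "AWS::EC2::Subnet",
            "Properties": {
                "VpcId": {"Ref": "ForensicVPC"},
                "CidrBlock": "10.0.0.0/28",
                "MapPublicIpOnLaunch": False,  # instances never get public IPs
            },
        },
    },
}

print(json.dumps(forensic_vpc_template, indent=2))
```

Generating the template from code also makes it easy to unit test the invariant that no internet-facing resource ever sneaks into the stack.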

Accessing your forensics infrastructure

Once you've defined where your investigative environment should reside, you must consider who will be accessing it, how they will do so, and what permissions they will need.

The forensic investigation team could be a separate team from the security incident response team, the same team, or a subset of it. You should provide precise access rights to the group of individuals performing the investigation as part of maintaining least privilege.

You should create specific roles for the various needs of the forensic procedures, each with only the permissions required. As with SCPs and the other situations described here, start with no permissions and add authorizations only as required while establishing and testing your templated environments. As an example, you could create the following roles within the forensic account:

  • Responder – acquire evidence
  • Investigator – analyze evidence
  • Data custodian – manage (copy, move, delete, and expire) evidence
  • Analyst – access forensics reports for analytics, trends, and forecasting (threat intelligence)
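The separation of duties between these roles could be captured when generating per-role identity policies; in the sketch below the action lists are hypothetical placeholders, not a complete permission set, and would be refined while testing the templated environment:

```python
# Hypothetical mapping of forensic roles to the narrow set of actions each
# one needs; real policies start empty and grow only as testing requires.
ROLE_ACTIONS = {
    "Responder": ["ec2:CreateSnapshot", "ec2:DescribeInstances"],
    "Investigator": ["s3:GetObject", "s3:ListBucket"],
    "DataCustodian": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
    "Analyst": ["s3:GetObject"],  # read-only access to final reports
}

def policy_for(role: str) -> dict:
    """Build a least-privilege identity policy document for one role."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {"Effect": "Allow", "Action": ROLE_ACTIONS[role], "Resource": "*"}
        ],
    }
```

For example, `policy_for("Investigator")` yields a read-only policy, while only the data custodian role carries any delete permission.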

You should establish an access procedure for each role, and include it in the response plan playbook. This will help you to ensure least privilege access as well as environment integrity. For example, establish a process for an owner of the Security Incident Response Plan to verify and approve the request for access to the environment. Another alternative is the two-person rule. Alert on log-in is an additional security measure that you can add to help increase confidence in the environment's integrity, and to monitor for unauthorized access.

You'll need the investigative role to have read-only access to the original evidence artifacts collected, generally consisting of Amazon Elastic Block Store (Amazon EBS) snapshots, memory dumps, logs, or other artifacts in an Amazon Simple Storage Service (Amazon S3) bucket. The original sources of evidence should be protected; MFA delete and S3 versioning are two methods for doing so. Work should be performed on copies of copies if rendering the original immutable isn't possible, especially if any modification of the artifact could happen. This is discussed in further detail below.

Evidence should only be accessible from the roles that absolutely require access, that is, the investigator and data custodian. To help prevent potential insider threat actors from being aware of the investigation, you should deny even read access from any roles not intended to access and analyze evidence.
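One hedged sketch of this deny-by-default stance is a bucket policy that explicitly denies read access to every principal except the two investigative roles; the account ID, role names, and bucket name below are hypothetical:

```python
# Sketch of an evidence-bucket policy using an explicit Deny with
# NotPrincipal, so only the investigator and data custodian roles can read.
ALLOWED_ROLES = [
    "arn:aws:iam::111122223333:role/Investigator",
    "arn:aws:iam::111122223333:role/DataCustodian",
]

evidence_bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyAllButInvestigativeRoles",
            "Effect": "Deny",
            "NotPrincipal": {"AWS": ALLOWED_ROLES},
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-evidence-bucket",
                "arn:aws:s3:::example-evidence-bucket/*",
            ],
        }
    ],
}
```

An explicit deny overrides any allow granted elsewhere, which is what keeps other roles in the account, including administrators, from quietly browsing the evidence.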

Protecting the integrity of your forensic infrastructures

Once you've built the organization, account structure, and roles, you must decide on the best strategy inside the account itself. Analysis of the collected artifacts can be done through forensic analysis tools hosted on an EC2 instance, ideally residing within a dedicated Amazon VPC in the forensics account. This Amazon VPC should be configured with the same restrictive approach you've taken so far, being fully isolated and auditable, with the only accessible resources being those dedicated to the forensic tasks.

This may mean that the Amazon VPC's subnets will not have internet gateways, and therefore all S3 access must be done through an S3 VPC endpoint. VPC flow logging should be enabled at the Amazon VPC level so that there are records of all network traffic. Security groups should be highly restrictive, and should deny all ports that aren't related to the requirements of the forensic tools. SSH and RDP access should be restricted and governed by auditable mechanisms such as a bastion host configured to log all connections and activity, AWS Systems Manager Session Manager, or similar.
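As an illustration of how restrictive the security group can be, the sketch below models a group with no inbound rules at all and outbound traffic limited to HTTPS toward the VPC's own address range, where the interface endpoints live; the CIDR value is a placeholder, and in practice you might reference the endpoints' security group instead:

```python
# Sketch of a highly restrictive security group for the forensic tooling
# instance: no inbound rules (access goes through Session Manager), and
# outbound limited to HTTPS toward VPC-local endpoints. CIDR is illustrative.
forensic_sg = {
    "ingress": [],  # deny all inbound traffic
    "egress": [
        {
            "protocol": "tcp",
            "from_port": 443,
            "to_port": 443,
            "destination": "10.0.0.0/24",  # placeholder: VPC-local endpoints only
        }
    ],
}
```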

If a graphical interface is necessary when using Systems Manager Session Manager, RDP or other methods can still be accessed through port forwarding. Commands and responses performed using Session Manager can be logged to Amazon CloudWatch and an S3 bucket; this allows auditing of all commands executed on the forensic tooling Amazon EC2 instances. Administrative privileges can also be restricted if required. You can also arrange to receive an Amazon Simple Notification Service (Amazon SNS) notification when a new session is started.

Considering that the Amazon EC2 forensic tooling instances may not have direct access to the internet, you may need to create a process to preconfigure and deploy standardized Amazon Machine Images (AMIs) with the appropriate installed and updated set of tooling for analysis. Several best practices apply to this process. The OS of the AMI should be hardened to reduce its vulnerable surface. You do this by starting with an approved OS image, such as an AWS-provided AMI or one you have created and manage yourself. Then proceed to remove unwanted programs, packages, libraries, and other components. Ensure that all updates and patches, security and otherwise, have been applied. Configuring a host-based firewall is also a good precaution, as are host-based intrusion detection tools. In addition, always ensure the attached disks are encrypted.

If your operating system is supported, we recommend creating golden images using EC2 Image Builder. Your golden image should be rebuilt and updated at least monthly, as you want to ensure that it's kept up to date with security patches and functionality.

EC2 Image Builder, combined with other tools, facilitates the hardening process; for example, it allows the creation of automated pipelines that produce Center for Internet Security (CIS) benchmark hardened AMIs. If you don't want to maintain your own hardened images, you can find CIS benchmark hardened AMIs on the AWS Marketplace.

Remember to assess the infrastructure requirements for the forensic tools, such as minimum CPU, memory, storage, and networking, before choosing an appropriate EC2 instance type. Though a variety of instance types are available, you'll want to make sure that you're keeping the right balance between cost and performance based on your minimum requirements and expected workloads.

The goal of this environment is to provide an efficient means to collect evidence, perform a comprehensive investigation, and effectively return to safe operations. Evidence is best acquired through the automated strategies discussed in How to automate incident response in the AWS Cloud for EC2 instances. Hashing evidence artifacts immediately upon acquisition is highly recommended in your evidence collection process. Hashes, and subsequently the evidence itself, can then be validated after subsequent transfers and accesses, ensuring the integrity of the evidence is maintained. Preserving the original evidence is crucial if legal action is taken.
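A minimal sketch of the hash-on-acquisition step, using only Python's standard library; in practice you would record the digest alongside the artifact and re-verify it after every transfer or access:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1024 * 1024) -> str:
    """Stream a file in chunks and return its SHA-256 hex digest."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(path: Path, recorded_digest: str) -> bool:
    """Re-hash an artifact after a transfer and compare to the recorded value."""
    return sha256_of(path) == recorded_digest
```

Streaming in chunks keeps memory usage flat even for multi-gigabyte disk images, and a failed `verify` call signals that the copy can't be trusted as evidence.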

Artifacts and evidence can include, but aren't limited to, the sources described below.

The control plane logs mentioned above, such as the CloudTrail logs, can be accessed in one of two ways. Ideally, the logs should reside in a central location with read-only access for investigations as needed. However, if not centralized, read access can be given to the original logs within the source account as needed. Read access to certain service logs found within the security account, such as AWS Config, Amazon GuardDuty, Security Hub, and Amazon Detective, may be necessary to correlate indicators of compromise with evidence discovered during the analysis.

As mentioned previously, it's vital to have immutable versions of all evidence. This can be achieved in many ways, including but not limited to the following examples:

  • Amazon EBS snapshots, including hibernation-generated memory dumps:
    • Original Amazon EBS disks are snapshotted, shared to the forensics account, used to create a volume, and then mounted as read-only for offline analysis.
  • Amazon EBS volumes manually captured:
    • Linux tools such as dc3dd can be used to stream a volume to an S3 bucket, as well as provide a hash, and the result can be made immutable using one of the S3 methods in the next bullet point.
  • Artifacts stored in an S3 bucket, such as memory dumps and other artifacts:
    • S3 Object Lock prevents objects from being overwritten or deleted for a fixed amount of time or indefinitely.
    • Using MFA delete requires the requestor to use multi-factor authentication to permanently delete an object.
    • Amazon S3 Glacier provides a Vault Lock function if you want to retain immutable evidence long term.
  • Disk volumes:
    • Linux: Mount in read-only mode.
    • Windows: Use one of the many commercial or open-source write-blocker applications available, some of which are made specifically for forensic use.
  • CloudTrail:
  • AWS Systems Manager inventory:
  • AWS Config data:
    • By default, AWS Config stores data in an S3 bucket, and it can be protected using the above methods.
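For the S3 Object Lock option above, a hedged sketch of the request parameters for storing an artifact under a compliance-mode retention period follows; the bucket and key names are hypothetical, and with boto3 you would pass this dictionary to `s3_client.put_object(...)` against a bucket created with Object Lock enabled:

```python
from datetime import datetime, timedelta, timezone

# Sketch of put_object parameters for storing evidence under S3 Object Lock.
# Bucket/key names are illustrative; the bucket must have Object Lock enabled.
def object_lock_params(bucket: str, key: str, body: bytes,
                       retention_days: int = 365) -> dict:
    return {
        "Bucket": bucket,
        "Key": key,
        "Body": body,
        # COMPLIANCE mode: no principal can delete or overwrite the object
        # version until the retain-until date passes.
        "ObjectLockMode": "COMPLIANCE",
        "ObjectLockRetainUntilDate":
            datetime.now(timezone.utc) + timedelta(days=retention_days),
    }
```

Compliance mode is the stricter of the two lock modes; governance mode would permit specially privileged principals to shorten the retention, which is usually undesirable for legal holds.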

Note: AWS services such as KMS can help enable encryption. KMS is integrated with AWS services to simplify using your keys to encrypt data across your AWS workloads.

As an example use case of Amazon EBS disks being shared as evidence to the forensics account, the following figure, Figure 2, shows a simplified S3 bucket folder structure you could use to store and work with evidence.

Figure 2 shows an S3 bucket structure for a forensic account. An S3 bucket and folder are created to hold incoming data, for example, from Amazon EBS disks, which is streamed to Incoming Data > Evidence Artifacts using dc3dd. The data is then copied from there to a folder in another bucket, Active Investigation > Root Directory > Extracted Artifacts, to be analyzed by the tooling installed on your forensic Amazon EC2 instance. Under Active Investigation, there are also folders for any investigation notes you make during analysis, as well as for the final reports, which are discussed at the end of this post. Finally, there is a bucket and folders for legal holds, where an object lock will be placed to hold evidence artifacts at a specific version.

Figure 2: Forensic account S3 bucket structure

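The folder structure in Figure 2 can be sketched as a pair of key-building helpers; the prefix names follow the figure but are illustrative, and you would adapt them to your own naming conventions:

```python
# Sketch of helpers that generate S3 object keys matching the Figure 2
# layout: incoming evidence lands in one bucket, working copies in another.
def incoming_key(case_id: str, artifact: str) -> str:
    """Key for raw evidence streamed in (e.g., by dc3dd)."""
    return f"incoming-data/evidence-artifacts/{case_id}/{artifact}"

def active_key(case_id: str, artifact: str) -> str:
    """Key for the working copy analyzed on the forensic EC2 instance."""
    return f"active-investigation/root-directory/extracted-artifacts/{case_id}/{artifact}"
```

Keeping key construction in one place makes it easy to enforce that analysis tooling only ever reads from the active-investigation prefix, never from the original incoming evidence.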


Finally, depending on the severity of the incident, your on-premises network and infrastructure might also be compromised. Having an alternative environment for your security responders to use in case of such an event reduces the chance of not being able to respond in an emergency. Amazon services such as Amazon WorkSpaces, a fully managed persistent desktop virtualization service, can be used to provide your responders a ready-to-use, independent environment that they can use to access the digital forensics and incident response tools needed to perform incident-related tasks.

Aside from the investigative tools, communications services are among the most crucial for coordination of response. You can use Amazon WorkMail and Amazon Chime to provide that capability independent of normal channels.


The goal of a forensic investigation is to provide a final report that's supported by the evidence. This includes what was accessed, who might have accessed it, how it was accessed, whether any data was exfiltrated, and so on. This report may be necessary for legal circumstances, such as criminal or civil investigations, or for situations requiring breach notifications. What output each circumstance requires should be determined in advance in order to develop an appropriate response and reporting process for each. A root cause analysis is vital in providing the information required to prepare your resources and environment to help prevent a similar incident in the future. Reports should not only include a root cause analysis, but also provide the methods, steps, and tools used to arrive at the conclusions.

This post has shown you how to get started creating and maintaining forensic environments, as well as how to enable your teams to perform advanced incident resolution investigations using AWS services. Implementing the groundwork for your forensics environment, as described above, allows you to use automated disk collection to begin iterating on your forensic data collection capabilities and be better prepared when security events occur.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on one of the AWS Security, Identity, and Compliance forums or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.