Mitigate data leakage by using AppStream 2.0 and end-to-end auditing

Customers want to use AWS services to operate on their most sensitive data, but they also want to make sure that only the right people have access to that data. Even when the right people are accessing data, customers want to account for what actions those users took while accessing it.

In this post, we show you how to use Amazon AppStream 2.0 to grant isolated access to sensitive data and reduce your attack surface. In addition, we show you how to achieve end-to-end auditing, which is designed to provide full traceability of all activities around your data.

To demonstrate this idea, we built a sample solution that provides a data scientist with access to an Amazon SageMaker Studio notebook through AppStream 2.0. The solution deploys a new Amazon Virtual Private Cloud (Amazon VPC) with isolated subnets, where the SageMaker notebook and AppStream 2.0 instances are set up.

Why AppStream 2.0?

AppStream 2.0 is a fully managed, non-persistent application and desktop streaming service that provides access to desktop applications from anywhere by using an HTML5-compatible desktop browser.

Each time you launch an AppStream 2.0 session, a freshly built, pre-provisioned instance is provided, using a prebuilt image. As soon as you close your session and the disconnect timeout period is reached, the instance is terminated. This allows you to carefully control the user experience and helps to ensure a consistent, secure environment each time. AppStream 2.0 also lets you enforce restrictions on user sessions, such as disabling the clipboard, file transfers, or printing.

Furthermore, AppStream 2.0 uses AWS Identity and Access Management (IAM) roles to grant fine-grained access to other AWS services such as Amazon Simple Storage Service (Amazon S3), Amazon Redshift, and Amazon SageMaker. This gives you both control over the access as well as an accounting, via Amazon CloudTrail, of what actions were taken and when.

These features make AppStream 2.0 uniquely suitable for environments that require high security and isolation.

Why SageMaker?

Developers and data scientists use SageMaker to build, train, and deploy machine learning models quickly. SageMaker does most of the work of each step of the machine learning process to help users develop high-quality models. SageMaker access from within AppStream 2.0 provides your data scientists and analysts with a suite of standard and familiar data science packages to use against isolated data.

Solution architecture overview

This solution allows a data scientist to work with a data set while connected to an isolated environment that doesn't have an outbound path to the internet.

First, you build an Amazon VPC with isolated subnets and with no internet gateways attached. This ensures that any instances stood up in the environment don't have access to the internet. To provide the resources inside the isolated subnets with a path to commercial AWS services such as Amazon S3, SageMaker, and AWS Systems Manager, you build VPC endpoints and attach them to the VPC, as shown in Figure 1.
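You can check these isolation properties programmatically. The sketch below, which assumes boto3 credentials and a hypothetical VPC ID, reports any attached internet gateways and any missing endpoints; the endpoint service names are illustrative and Region-dependent. The `missing_endpoints` helper is pure, so it can be exercised without AWS access.

```python
def missing_endpoints(existing_services, required_services):
    """Pure helper: which required endpoint services are not yet present."""
    return sorted(set(required_services) - set(existing_services))

def audit_vpc_isolation(vpc_id, region="us-east-1"):
    """Query the live VPC (requires AWS credentials) and return a report dict."""
    import boto3  # deferred so the pure helper above is usable offline

    ec2 = boto3.client("ec2", region_name=region)
    # Any attached internet gateway would break the isolation requirement.
    igws = ec2.describe_internet_gateways(
        Filters=[{"Name": "attachment.vpc-id", "Values": [vpc_id]}]
    )["InternetGateways"]
    endpoints = ec2.describe_vpc_endpoints(
        Filters=[{"Name": "vpc-id", "Values": [vpc_id]}]
    )["VpcEndpoints"]
    present = [e["ServiceName"] for e in endpoints]
    # Illustrative service names; verify them for your Region and services.
    required = [
        f"com.amazonaws.{region}.s3",
        f"com.amazonaws.{region}.sagemaker.api",
        f"com.amazonaws.{region}.ssm",
    ]
    return {
        "internet_gateways": [g["InternetGatewayId"] for g in igws],
        "missing_endpoints": missing_endpoints(present, required),
    }
```

An empty `internet_gateways` list and an empty `missing_endpoints` list together indicate the network matches the design in Figure 1.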

Figure 1: Network Diagram


You then build an AppStream 2.0 stack and fleet, and attach a security group and an IAM role to the fleet. The purpose of the IAM role is to provide the AppStream 2.0 instances with access to downstream AWS services such as Amazon S3 and SageMaker. The IAM role design follows the least privilege model, to ensure that only the access required for each task is granted.

During the building of the stack, you'll enable AppStream 2.0 Home Folders. This feature creates an S3 bucket where users can store files from inside their AppStream 2.0 session. The bucket is created with a dedicated prefix for each user, where only they have access. We use this prefix to store the user's pre-signed SageMaker URLs, ensuring that no one user can access another user's SageMaker notebook.
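The per-user prefix works because AppStream 2.0 derives each user's home-folder path from a SHA-256 hash of the user name. A minimal sketch of that layout follows; the `user/<auth_type>/` structure here is an illustration, so check the Home Folders documentation for the exact path used by your authentication type.

```python
import hashlib

def home_folder_prefix(user_name, auth_type="custom"):
    """Sketch of the per-user home-folder S3 prefix.

    AppStream 2.0 names the user's folder after a SHA-256 hex digest of the
    user name, which is what keeps one user's prefix unguessable-by-listing
    from another's. The surrounding path segments are illustrative.
    """
    digest = hashlib.sha256(user_name.encode("utf-8")).hexdigest()
    return f"user/{auth_type}/{digest}/"
```

Because the digest is deterministic, the Lambda function later in this post can compute the same prefix to find a given user's session.json file.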

You then deploy a SageMaker notebook for the data scientist to use to access and analyze the isolated data.

To confirm that the user ID on the AppStream 2.0 session hasn't been spoofed, you create an AWS Lambda function that compares the user ID of the data scientist against the AppStream 2.0 session ID. If the user ID and session ID match, this indicates that the user ID hasn't been impersonated.

Once the session has been validated, the Lambda function generates a pre-signed SageMaker URL that gives the data scientist access to the notebook.

Finally, you enable AppStream 2.0 usage reports to ensure that you have end-to-end auditing of your environment.

To help you easily deploy this solution into your environment, we've built an AWS Cloud Development Kit (AWS CDK) app and stacks, using Python. To deploy this solution, you can go to the Deploy the solution section in this blog post.

Note: This solution was built with all resources residing in a single AWS Region. Multi-Region support is possible but isn't part of this blog post.

Solution requirements

Before you build a solution, you must know your security requirements. The solution in this post assumes a set of standard security requirements that you typically find in an enterprise environment:

  • User authentication is provided by a Security Assertion Markup Language (SAML) identity provider (IdP).
  • IAM roles are used to access AWS services such as Amazon S3 and SageMaker.
  • AWS IAM access keys and secret keys are prohibited.
  • IAM policies follow the least privilege model so that only the required access is granted.
  • Windows clipboard, file transfer, and printing to local devices is prohibited.
  • Auditing and traceability of all activities is required.

Note: Before you can integrate SAML with AppStream 2.0, you will need to follow the AppStream 2.0 Integration with SAML 2.0 guide. There are quite a few steps, and it will take some time to set up. SAML authentication is optional, however. If you just want to prototype the solution and see how it works, you can do that without enabling SAML integration.

Solution components

This solution uses the following technologies:

  • Amazon VPC – provides an isolated network where the solution will be deployed.
  • VPC endpoints – provide access from the isolated network to commercial AWS services such as Amazon S3 and SageMaker.
  • AWS Systems Manager – stores parameters such as S3 bucket names.
  • AppStream 2.0 – provides hardened instances to run the solution on.
  • AppStream 2.0 home folders – store users' session information.
  • Amazon S3 – stores application scripts and pre-signed SageMaker URLs.
  • SageMaker notebook – provides data scientists with tools to access the data.
  • AWS Lambda – runs scripts to validate the data scientist's session, and generates pre-signed URLs for the SageMaker notebook.
  • AWS CDK – deploys the solution.
  • PowerShell – processes scripts on AppStream 2.0 Microsoft Windows instances.

Solution high-level design and process flow

The following figure is a high-level depiction of the solution and its process flow.


Figure 2: Solution process flow

The process flow, illustrated in Figure 2, is as follows:

  1. A data scientist clicks on an AppStream 2.0 federated URL or a streaming URL.
    a. If it's the federated URL, the data scientist authenticates using their corporate credentials, as well as MFA if required.
    b. If it's the streaming URL, no further authentication is required.
  2. The data scientist is presented with a PowerShell application that has been made available to them.
  3. After starting the application, it starts the PowerShell script on an AppStream 2.0 instance.
  4. The script then:
    a. Downloads a second PowerShell script from an S3 bucket.
    b. Collects local AppStream 2.0 environment variables:
      • AppStream_UserName
      • AppStream_Session_ID
      • AppStream_Resource_Name
    c. Stores the variables in the session.json file and copies the file to the home folder of the session on Amazon S3.
  5. The PUT event of the JSON file into the Amazon S3 bucket triggers an AWS Lambda function that performs the following:
    a. Reads the session.json file from the user's home folder on Amazon S3.
    b. Performs a describe action against the AppStream 2.0 API to ensure that the session ID and the user ID match. This helps to prevent the user from manipulating the local environment variable to pretend to be someone else (spoofing), and potentially gain access to unauthorized data.
    c. If the session ID and user ID match, a pre-signed SageMaker URL is generated, stored in session_url.txt, and copied to the user's home folder on Amazon S3.
    d. If the session ID and user ID do not match, the Lambda function ends without generating a pre-signed URL.
  6. When the PowerShell script detects the session_url.txt file, it opens the URL, giving the user access to their SageMaker notebook.
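Steps 5a through 5d can be sketched as a Lambda handler. The session-matching check is factored into a pure function; the stack, fleet, and notebook names, and the URL file name, are placeholders for whatever your deployment uses, and the AWS calls assume boto3 credentials at run time.

```python
import json

def session_matches(claimed_user_id, claimed_session_id, active_sessions):
    """Pure check: the user ID written into session.json must belong to the
    AppStream 2.0 session ID it claims (this is the anti-spoofing test)."""
    return any(
        s["Id"] == claimed_session_id and s["UserId"] == claimed_user_id
        for s in active_sessions
    )

def lambda_handler(event, context):
    import boto3  # deferred so session_matches() is testable offline

    s3 = boto3.resource("s3")
    bucket = event["Records"][0]["s3"]["bucket"]["name"]
    key = event["Records"][0]["s3"]["object"]["key"]

    # 5a. Read the session.json that the PowerShell script uploaded.
    claim = json.loads(s3.Object(bucket, key).get()["Body"].read())

    # 5b. Ask the AppStream 2.0 API which sessions this user really has.
    # "Data-Sandbox-*" names are placeholders for your stack and fleet.
    appstream = boto3.client("appstream")
    sessions = appstream.describe_sessions(
        StackName="Data-Sandbox-Stack",
        FleetName="Data-Sandbox-Fleet",
        UserId=claim["AppStream_UserName"],
    )["Sessions"]

    if not session_matches(
        claim["AppStream_UserName"], claim["AppStream_Session_ID"], sessions
    ):
        return  # 5d. Mismatch: end without generating a URL.

    # 5c. Generate a pre-signed SageMaker URL and store it beside session.json.
    sagemaker = boto3.client("sagemaker")
    url = sagemaker.create_presigned_notebook_instance_url(
        NotebookInstanceName="Data-Sandbox-Notebook",
        SessionExpirationDurationInSeconds=1800,
    )["AuthorizedUrl"]
    out_key = key.rsplit("/", 1)[0] + "/session_url.txt"
    s3.Object(bucket, out_key).put(Body=url.encode("utf-8"))
```

Keeping the comparison in `session_matches` makes the security-critical logic unit-testable without an AWS account.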

Code structure

To help you deploy this solution in your environment, we've built a set of code that you can use. The code is mostly written in Python for the AWS CDK framework, with an AWS CDK application and some PowerShell scripts.

Note: We've used the default settings on most of the AWS resources our code deploys. Before deploying the code, you should conduct a thorough code review to ensure that the resources you're deploying meet your organization's requirements.

AWS CDK application – ./app.py

To make this application lightweight and modular, we've structured it in separate AWS CDK nested stacks:

  • vpc-stack – deploys a VPC with 2 isolated subnets, along with 3 VPC endpoints.
  • s3-stack – deploys an S3 bucket, copies the AppStream 2.0 PowerShell scripts, and stores the bucket name in an SSM parameter.
  • appstream-service-roles-stack – deploys AppStream 2.0 service roles.
  • appstream-stack – deploys the AppStream 2.0 fleet and stack, along with the needed IAM roles and security groups.
  • appstream-start-fleet-stack – builds a custom resource that starts the AppStream 2.0 fleet.
  • notebook-stack – deploys a SageMaker notebook, along with IAM roles, security groups, and an AWS Key Management Service (AWS KMS) encryption key.
  • saml-stack – deploys a SAML role as a placeholder for SAML authentication.

PowerShell scripts

The solution uses the following PowerShell scripts on the AppStream 2.0 instances:

  • sagemaker-notebook-launcher.ps1 – This script is part of the AppStream 2.0 image and downloads the sagemaker-notebook.ps1 script.
  • sagemaker-notebook.ps1 – starts the process of validating the session and generating the SageMaker pre-signed URL.

Note: Having the second script reside on Amazon S3 provides flexibility. You can change this script without having to create a new AppStream 2.0 image.

Deployment prerequisites

To deploy this solution, your deployment environment must meet the following prerequisites:

Note: We used AWS Cloud9 with Amazon Linux 2 to test this solution, as it comes preinstalled with most of the prerequisites for deploying this solution.

Deploy the solution

Now that you know the design and components, you're ready to deploy the solution.

Note: In our demo solution, we deploy two stream.standard.small AppStream 2.0 instances, using Windows Server 2019. This gives you a reasonable example to work from. In your own environment you might need more instances, a different instance type, or a different version of Windows. Also, we deploy a single SageMaker notebook instance of type ml.t3.medium. To change the AppStream 2.0 and SageMaker instance types, you will need to modify the stacks/data_sandbox_appstream.py and stacks/data_sandbox_notebook.py files respectively.

Step 1: AppStream 2.0 image

An AppStream 2.0 image contains applications that you can stream to your users. It's what allows you to curate the user experience by preconfiguring the settings of the applications you stream to your users.

To build an AppStream 2.0 image:

  1. Build an image following the Create a Custom AppStream 2.0 Image by Using the AppStream 2.0 Console tutorial.

    Note: In Step 1: Install Applications on the Image Builder in this tutorial, you will be asked to choose an Instance family. For this example, we chose General Purpose. If you choose a different Instance family, you will need to make sure that the appstream_instance_type specified under Step 2: Code modification is of the same family.

    In Step 6: Finish Creating Your Image in this tutorial, you will be asked to provide a unique image name. Note down the image name, as you will need it in Step 2 of this blog post.

  2. Copy sagemaker-notebook-launcher.ps1 to a location on the image. We recommend that you copy it to C:\AppStream.
  3. In Step 2, Create an AppStream 2.0 Application Catalog, of the tutorial, use C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe as the application, and the path to sagemaker-notebook-launcher.ps1 as the launch parameter.

Note: While testing the application during the image building process, the PowerShell script will fail because the underlying infrastructure isn't present. You can ignore that failure during the image building process.

Step 2: Code modification

Next, you need to modify some of the code to fit your environment.

Make the following changes in the cdk.json file:

  • vpc_cidr – Supply your preferred CIDR range to be used for the VPC.

    Note: VPC CIDR ranges are your private IP space and thus can consist of any valid RFC 1918 range. However, if the VPC you plan on using for AppStream 2.0 needs to connect to other parts of your private network (on premises or other VPCs), you need to choose a range that does not conflict or overlap with the rest of your infrastructure.

  • appstream_image_name – Enter the image name you chose when you built the AppStream 2.0 image in Step 1.a.
  • appstream_environment_name – The environment name is strictly cosmetic and drives the naming of your AppStream 2.0 fleet and stack.
  • appstream_instance_type – Enter the AppStream 2.0 instance type. The instance type must be part of the same instance family you used in Step 1 of the To build an AppStream 2.0 image section. For a list of AppStream 2.0 instances, visit https://aws.amazon.com/appstream2/pricing/.
  • appstream_fleet_type – Enter the fleet type. Allowed values are ALWAYS_ON or ON_DEMAND.
  • idp_name – If you have integrated SAML with this solution, you will need to enter the IdP name you chose when creating the SAML provider in the IAM console.
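Taken together, the context values above might look like the following in cdk.json. Every value here is a placeholder for illustration, and the exact key names and nesting depend on the repository you deploy from:

```json
{
  "app": "python3 app.py",
  "context": {
    "vpc_cidr": "10.10.0.0/24",
    "appstream_image_name": "Data-Sandbox-Image",
    "appstream_environment_name": "Data-Sandbox",
    "appstream_instance_type": "stream.standard.small",
    "appstream_fleet_type": "ON_DEMAND",
    "idp_name": "my-saml-idp"
  }
}
```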

Step 3: Deploy the AWS CDK application

The CDK application deploys the CDK stacks.

The stacks include:

    • VPC with isolated subnets
    • VPC endpoints for S3, SageMaker, and Systems Manager
    • S3 bucket
    • AppStream 2.0 stack and fleet
    • Two AppStream 2.0 stream.standard.small instances
    • A single SageMaker ml.t3.medium notebook

Run the following commands to deploy the AWS CDK app:

  1. Install the AWS CDK Toolkit.
    npm install -g aws-cdk
  2. Create and activate the virtual environment.
    python -m venv .datasandbox-env
    source .datasandbox-env/bin/activate
  3. Change directory to the root folder of the code repository.
  4. Install the required packages.
    pip install -r requirements.txt
  5. If you haven't used AWS CDK in your account yet, run:
    cdk bootstrap
  6. Deploy the AWS CDK stack.
    cdk deploy DataSandbox

Step 4: Test the solution

After the stack has deployed, allow approximately 25 minutes for the AppStream 2.0 fleet to reach a running state. Testing will fail if the fleet isn't running.
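Rather than watching the console, you can poll the fleet state from a short script. The sketch below assumes boto3 credentials and a placeholder fleet name; the `fleet_ready` predicate is pure so the state check can be tested offline.

```python
import time

def fleet_ready(fleets, name):
    """Pure check: is the named fleet reporting a RUNNING state?"""
    return any(f["Name"] == name and f["State"] == "RUNNING" for f in fleets)

def wait_for_fleet(name, poll_seconds=30, timeout_seconds=1800):
    """Poll the AppStream 2.0 API until the fleet is RUNNING (needs credentials)."""
    import boto3  # deferred so fleet_ready() is testable offline

    client = boto3.client("appstream")
    deadline = time.time() + timeout_seconds
    while time.time() < deadline:
        fleets = client.describe_fleets(Names=[name])["Fleets"]
        if fleet_ready(fleets, name):
            return True
        time.sleep(poll_seconds)
    return False  # fleet never reached RUNNING within the timeout
```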

Without SAML

If you haven't added SAML authentication, use the following steps to test the solution.

  1. In the AWS Management Console, go to AppStream 2.0 and then to Stacks.
  2. Select the stack, and then select Action.
  3. Select Create streaming URL.
  4. Enter any user name and select Get URL.
  5. Enter the URL in another tab of your browser and test your application.
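The console steps above have an API equivalent if you prefer to script the test. The sketch below uses boto3; the stack and fleet names are placeholders, and the request-building helper is separated out so it can be checked without AWS access.

```python
def streaming_url_request(stack_name, fleet_name, user_id, validity_seconds=300):
    """Pure helper: build the CreateStreamingURL request parameters."""
    return {
        "StackName": stack_name,
        "FleetName": fleet_name,
        "UserId": user_id,
        "Validity": validity_seconds,  # seconds the URL stays usable
    }

def create_streaming_url(stack_name, fleet_name, user_id):
    """Call the AppStream 2.0 API (requires AWS credentials)."""
    import boto3  # deferred so streaming_url_request() is testable offline

    client = boto3.client("appstream")
    resp = client.create_streaming_url(
        **streaming_url_request(stack_name, fleet_name, user_id)
    )
    return resp["StreamingURL"]
```

Opening the returned URL in a browser starts a session exactly as the console-generated URL does.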


With SAML

If you are using SAML authentication, you will have a federated login URL that you need to go to.

If everything is working, your SageMaker notebook will be launched as shown in Figure 3.


Figure 3: SageMaker Notebook

Note: If you get a browser timeout, verify that the SageMaker notebook instance "Data-Sandbox-Notebook" is currently in InService status.


Auditing

Auditing for this solution is provided through AWS CloudTrail and AppStream 2.0 usage reports. Though CloudTrail is enabled by default, to collect and store the CloudTrail logs, you must create a trail for your AWS account.

The following logs will be available for you to use to provide auditing:

  • Login – CloudTrail
    • User ID
    • IAM SAML role
    • AppStream stack
  • S3 – CloudTrail
    • IAM role
    • Source IP
    • S3 bucket
    • Amazon S3 object
  • AppStream 2.0 instance IP address – AppStream 2.0 usage reports

Connecting the dots

To get an accurate picture of your users' activity, you have to correlate several logs from different services. First, you collect the login information from CloudTrail. This gives you the user ID of the user who logged in. Then you collect the Amazon S3 put from CloudTrail, which gives you the IP address of the AppStream 2.0 instance. And finally, you collect the AppStream 2.0 usage report, which gives you the IP address of the AppStream 2.0 instance, as well as the user ID. This allows you to connect the user ID to the activity on Amazon S3. For auditing and monitoring exploration activities with SageMaker, please visit this GitHub repository.
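That correlation, CloudTrail user ID to instance IP via the usage report, amounts to a small join over parsed log records. The record shapes below are simplified illustrations of the relevant fields, not the exact CloudTrail or usage-report schemas:

```python
def correlate_activity(s3_events, usage_report_rows):
    """Join S3 CloudTrail events (which carry the source IP) to AppStream 2.0
    usage-report rows (which carry instance IP plus user ID), producing a
    user-to-object activity trail."""
    # Map each AppStream instance IP back to the user who held the session.
    ip_to_user = {row["instance_ip"]: row["user_id"] for row in usage_report_rows}
    return [
        {
            "user_id": ip_to_user[evt["source_ip"]],
            "action": evt["action"],
            "object": evt["object_key"],
        }
        for evt in s3_events
        if evt["source_ip"] in ip_to_user  # ignore traffic from other sources
    ]
```

Events whose source IP doesn't appear in the usage report (for example, AWS service traffic) simply drop out of the joined result.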

Though the logs are collected automatically, what we've shown you here is a manual way of sifting through them. For a more robust solution for querying and analyzing CloudTrail logs, visit Querying AWS CloudTrail Logs.

Costs of this solution

The cost of running this solution depends on a number of factors, such as the instance size, the amount of data you store, and how many hours you use the solution. AppStream 2.0 is charged per instance hour, and there is one instance in this sample solution. You can see the details on the AppStream 2.0 pricing page. VPC endpoints are charged by the hour and by how much data passes through them. There are three VPC endpoints in this solution (S3, Systems Manager, and SageMaker). VPC endpoint pricing is described on the PrivateLink pricing page. SageMaker notebooks are charged based on the number of instance hours and the instance type. There is one SageMaker instance in this solution, which may be eligible for free tier pricing. See the SageMaker pricing page for more information. Amazon S3 storage pricing depends on how much data you store, what kind of storage you use, and how much data transfers in and out of S3. The usage in this solution may be eligible for free tier pricing. You can see the details on the S3 pricing page.
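As a back-of-the-envelope sketch, the per-hour charges above add up linearly with usage. Every rate in this example is a made-up placeholder, so substitute the current figures from the pricing pages before relying on the result:

```python
def estimate_hourly_costs(hours_used, rates):
    """Sum simple hourly charges. `rates` maps component name to $/hour;
    storage and data-transfer charges are deliberately out of scope."""
    return round(sum(rate * hours_used for rate in rates.values()), 2)

# Placeholder rates for illustration only; check the AWS pricing pages.
example_rates = {
    "appstream_instance": 0.10,
    "sagemaker_notebook": 0.05,
    "vpc_endpoints": 3 * 0.01,  # three interface endpoints
}
```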

Before deploying this solution, make sure to calculate your costs using the AWS Pricing Calculator and the AppStream 2.0 pricing calculator.


Conclusion

Congratulations! You have deployed a solution that provides your users with access to sensitive and isolated data in a secure manner using AppStream 2.0. You have also implemented a mechanism that is designed to prevent user impersonation, and you have enabled end-to-end auditing of all user activities.

To learn about how Amazon is using AppStream 2.0, visit the post How Amazon uses AppStream 2.0 to provide data analysts and scientists with access to sensitive data.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.
