August 8, 2022

Sample AWS Environment

When it comes to hosting infrastructure in the cloud, the first and most crucial step is planning. While creating a comprehensive plan for a cloud environment can be incredibly daunting, it’s the best way to guarantee your implementation will go as smoothly as possible. In this post, I will walk through how to create a sample environment plan given a pre-existing infrastructure. 

Step 1: Take Inventory (Current State) 

The first step to creating a cloud environment plan is to critically document your application infrastructure’s current state; this means analyzing all existing accounts and their content. 

For simplicity’s sake, I will say that Company X is already up and running in AWS. Company X has even gone so far as to break out workloads into six different accounts: 

1.) Platform-management: This account hosts periphery services, platform-wide services, and security tooling.

2.) Dev: Applications are developed in this account.

3.) DevSecOps: DevOps tooling (Jira, Bitbucket, and Ansible) lives in this account.

4.) Staging: Live data is stored here after having gone through development and testing.

5.) Logging: Server logging for all applications is consolidated in this account. These logs include CloudTrail logs (which record API calls from all users) and VPC Flow Logs (which record network traffic to and from resources in each VPC).

6.) Test: This account simulates the production environment by testing applications using production data.

These accounts are all part of an Organizational Unit under Company X, which allows all accounts to be centrally managed. Each user is also configured with a Switch Role, which lets them move quickly between accounts in the organization without logging in and out repeatedly.
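Under the hood, a Switch Role is just an IAM role in the target account that trusts the account users sign in from. Below is a minimal, hypothetical CloudFormation sketch of such a role; the account ID, role name, and attached policy are placeholders rather than Company X's actual configuration.

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Cross-account switch role for a member account (illustrative only)

Resources:
  OrganizationSwitchRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: OrganizationAccountAccessRole        # placeholder role name
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              AWS: arn:aws:iam::111111111111:root    # placeholder ID for the account users sign in from
            Action: sts:AssumeRole
            Condition:
              Bool:
                aws:MultiFactorAuthPresent: 'true'   # require MFA before switching
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/ReadOnlyAccess     # scope this per IAM group in practice
```

Users then pick the role from the console's Switch Role menu or assume it with the aws sts assume-role CLI command.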

In each account, the following capabilities are enabled: 

● IAM users: Users are people or applications that interact with AWS. Users are added to appropriate IAM groups, which specify their permissions.

● IAM groups: The current groups include administrators, auditors, developers, power users, and viewers, each with the appropriate permissions.

● Config: AWS Config is a service that enables the auditing and evaluation of AWS resources. It continuously monitors and records resource configurations and can automatically evaluate them against your desired settings. The current Config setup includes the following rules:

        ○ S3 Public Read Access: Checks that your Amazon S3 buckets do not allow public read access. The Config rule evaluates the Block Public Access settings, the bucket policy, and the bucket access control list (ACL).

        ○ IAM Password Policy: Each IAM user created must have a password that follows a strict, best-practice policy. 

        ○ RDS Storage Encryption: Checks that all resources created in the Relational Database Service (RDS) have storage encryption enabled.

        ○ VPC Flow Logs Enabled: A recorded history of access to resources in the VPCs; these logs go to a consolidated bucket via a CloudFormation template.

        ○ CloudTrail Enabled: CloudTrail is enabled on any future provisioned EC2 or S3 resources, and logs live in a consolidated bucket via a CloudFormation template (listed below).

        ○ Tagging: All future provisioned resources are required to be tagged with the key “Project” and the appropriate value. 

● AWS Inspector: This is an automated security assessment service that helps improve the compliance of applications deployed on AWS by assessing for vulnerabilities. Although it is currently enabled, no assessments are recorded. Inspector is therefore also included in the future state of the environment. 

● CloudFormation Templates: CloudFormation templates provision and manage resources hosted in AWS. The execution of the CloudFormation templates is currently automated via Ansible. Existing deployed CloudFormation templates include: 

        ○ IAM groups: The appropriate IAM groups for users. 

        ○ IAM policies: The applicable policies are attached to each of the IAM groups. 

        ○ CloudTrail: All logs from CloudTrail are directed to an S3 bucket in the logging account. 

        ○ VPC Flow Logs: All VPC Flow Logs are directed to a bucket in the logging account. 

        ○ EC2 Auto-Tagging: All EC2 instances spun up are tagged automatically with the owner, principal ID, and the project. 

● Consolidated Logging: VPC Flow Logs and CloudTrail logs are directed to centralized buckets in the logging account via the CloudFormation templates listed above; a minimal sketch of a Config rule and a consolidated CloudTrail template follows this list.
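To make the current state more concrete, here is a minimal CloudFormation sketch in the spirit of the templates described above: two AWS managed Config rules (the S3 public read check and the required Project tag) plus a trail that delivers CloudTrail logs to a central bucket. The rule names and bucket name are placeholders, and the sketch assumes a Config recorder and the bucket's CloudTrail delivery policy already exist.

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Sample Config rules and consolidated CloudTrail (names are placeholders)

Resources:
  # Managed rule: flag S3 buckets that allow public read access
  S3PublicReadProhibited:
    Type: AWS::Config::ConfigRule
    Properties:
      ConfigRuleName: s3-bucket-public-read-prohibited
      Source:
        Owner: AWS
        SourceIdentifier: S3_BUCKET_PUBLIC_READ_PROHIBITED

  # Managed rule: flag resources that are missing the required "Project" tag
  RequiredProjectTag:
    Type: AWS::Config::ConfigRule
    Properties:
      ConfigRuleName: required-project-tag
      InputParameters:
        tag1Key: Project
      Source:
        Owner: AWS
        SourceIdentifier: REQUIRED_TAGS

  # Multi-region trail delivering API activity to the central logging bucket
  ConsolidatedTrail:
    Type: AWS::CloudTrail::Trail
    Properties:
      IsLogging: true
      IsMultiRegionTrail: true
      S3BucketName: companyx-central-cloudtrail-logs   # placeholder bucket in the logging account
```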

Step 2: Note Necessary Improvements (Future State) 

Company X is looking pretty good so far. Here is a list of best practices they have already followed when it comes to hosting in the cloud: 

  • Separate Accounts: By breaking your workloads into different accounts, you can align the ownership and decision-making with those accounts and avoid conflicts in how workloads in other accounts are managed. Another crucial benefit to separate accounts is the ability to apply unique security controls for each environment. 
  • Identity Access Management: By defining and managing the roles and access for users, Company X can easily keep track of employee activity and meet industry compliance standards. 
  • Config Rules: AWS Config can be used for resource administration, compliance, managing configuration changes, and more. By creating Config rules for your resources, you can quickly see which resources violate your rules and view remediation suggestions.
  • Automated infrastructure: Company X is using CloudFormation templates automated through Ansible; this means that Company X can quickly deploy changes or create a duplicate backup environment simply by running the Ansible code.   
  • Consolidated logging: Centralized logging greatly simplifies log analysis and correlation tasks. It becomes much easier to troubleshoot issues within your infrastructure or to demonstrate compliance by storing logs in one place. 

Although most of Company X’s architecture is already running in AWS, a few best practices should be implemented for the environment to be considered both secure and efficient. The following improvements should be made to the environment to achieve the desired future state: 

● Security Information & Event Management (SIEM) Solution: A SIEM solution gathers log data from your infrastructure and provides real-time analysis of security alerts. A proper SIEM solution uncovers suspicious activities or other vulnerabilities in your system before they become an attack. Company X has no SIEM solution in place, so research should be done to choose one and integrate it with the current architecture.

● VPC Peering: VPC Peering improves network security by enabling private connectivity between two or more VPCs. With multiple VPCs, you should use VPC peering to keep traffic between them off the public internet. The best way to implement VPC peering for Company X would be to automate the setup using CloudFormation templates and Ansible (a minimal sketch follows this list).

● CloudWatch Logs: Company X has no CloudWatch logs or events configured, so this activity is currently undocumented. It's essential to record actions taken by EC2 instances in the VPCs, so the future state should include CloudWatch Logs for these events, similarly automated with CloudFormation / Ansible.

● S3 Audit Bucket: Company X is in a highly regulated industry, so they need an audit bucket for internal audit reviews. This bucket will hold logs showing logins to EC2 instances, actions taken in RDS instances, and detailed S3 events. Again, Company X should configure this bucket and its contents using CloudFormation / Ansible.

● AWS Inspector: As mentioned before, Company X has Inspector enabled in each account. Going forward, Company X should run Inspector assessments regularly to produce a detailed list of security findings prioritized by severity.

● An AWS Point Person: I strongly recommend that Company X assign an individual to serve as the environment owner for each environment created through automation. This individual will monitor logs and have a thorough knowledge of the AWS infrastructure. 
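For the VPC peering item above, a minimal CloudFormation sketch might look like the following. The VPC IDs, route table IDs, and CIDR ranges are parameters and placeholders rather than Company X's actual network layout; peering across accounts would additionally require the peer account ID and an accepter role.

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: VPC peering between two existing VPCs (illustrative only)

Parameters:
  RequesterVpcId:
    Type: AWS::EC2::VPC::Id
  AccepterVpcId:
    Type: AWS::EC2::VPC::Id
  RequesterRouteTableId:
    Type: String
  AccepterRouteTableId:
    Type: String

Resources:
  PeeringConnection:
    Type: AWS::EC2::VPCPeeringConnection
    Properties:
      VpcId: !Ref RequesterVpcId
      PeerVpcId: !Ref AccepterVpcId

  # Route from the requester VPC to the accepter VPC's CIDR (placeholder range)
  RequesterToAccepterRoute:
    Type: AWS::EC2::Route
    Properties:
      RouteTableId: !Ref RequesterRouteTableId
      DestinationCidrBlock: 10.1.0.0/16
      VpcPeeringConnectionId: !Ref PeeringConnection

  # Return route from the accepter VPC to the requester VPC's CIDR (placeholder range)
  AccepterToRequesterRoute:
    Type: AWS::EC2::Route
    Properties:
      RouteTableId: !Ref AccepterRouteTableId
      DestinationCidrBlock: 10.0.0.0/16
      VpcPeeringConnectionId: !Ref PeeringConnection
```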

Step 3: Create Intermediary Steps 

Once you have the current and future states fleshed out, you can begin forming a plan to get from Point A to Point B. Although many people like to create a timeline in this phase, I encourage you to hold off on timelines until you understand (in detail) both the work to be done and the resources available to you. Although the architecture implementation work is paramount, the importance of documentation cannot be overstated. In this section, I’ll talk about the next steps for both the implementation and documentation teams. 

Architecture Work 

As outlined above, the following architecture changes are required:

  • SIEM: Choose and implement a solution. 
  • VPC Peering: Automate and configure VPC Peering. 
  • CloudWatch Logs: Automate and configure logging for EC2. 
  • S3 Audit Bucket: Automate and configure a bucket for Audit trails. 
  • AWS Inspector: Configure and run regular Inspector assessments.  
  • AWS Point Person: Assign and train someone to own all AWS environments. 

I already briefly went over these implementation steps, so I’ll leave the architecture section as is. Ideally, these steps will be broken down into individual tasks and assigned to engineers through a task management system like Jira.  

Documentation 

During the implementation phase, someone should be working alongside engineers to document the steps to configure the AWS environment. Documentation enables the ease of environment replication and provides an in-depth, consolidated resource. In this case, the documentation should include:  

● Architectural overview: An overview of all resources provisioned in the environment and all architectural diagrams of how they interact. 

● Configuration: A step-by-step configuration guide for each resource in the AWS environment; this includes details about the IAM users, groups, and roles. 

● Automated Setup: The steps to run the CloudFormation templates created to automate certain infrastructure using the CLI and Ansible; a brief Ansible sketch follows this list.

● Logging: This section details the purpose, use, and maintenance of CloudTrail, CloudWatch, and VPC Flow Logs. 

● Security measures: All security measures taken to enforce compliance and limit vulnerabilities. Include all NACL templates, the SIEM solution, and information on AWS Inspector. 

● Long-term account management: The AWS environment must be maintained after the initial standup. This section details the importance of monitoring logs and CloudWatch, and how to ensure resources are all compliant with Config rules.

● AWS Environment Owner: A separate document detailing the role and responsibilities of the AWS Environment Owner should be created to ease onboarding into this position.
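As an example of what the Automated Setup section might capture, here is a hypothetical Ansible task that deploys one of the CloudFormation templates with the amazon.aws.cloudformation module; the stack name, region, and template path are assumptions for illustration.

```yaml
- name: Deploy a CloudFormation stack for Company X (illustrative only)
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Create or update the consolidated-logging stack
      amazon.aws.cloudformation:
        stack_name: companyx-consolidated-logging      # placeholder stack name
        state: present
        region: us-east-1                              # placeholder region
        template: templates/consolidated-logging.yml   # placeholder path to the template
        tags:
          Project: cloud-migration                     # satisfies the required "Project" tag
```

Equivalent documentation for a plain CLI run would note the aws cloudformation deploy command and its parameters.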

 

By devoting more time to your research and planning stages, you can save yourself weeks of implementation. Instead of improving your architecture piecemeal, I encourage you to first give it a comprehensive review. By becoming acquainted with how your services interact, you will quickly see where your architecture is lacking. With a little research, you can create your own improvement plan and avoid paying for pricey consultants who may not understand your big picture. Like many things in life, cloud hosting is mostly about just one thing: preparation.

