AWS Cloud Computing ☁️
Engaging in cloud infrastructure, serverless architecture, identity and access management, compute services, and prod deployment strategies.
Publicly sharing my journey learning AWS and DevOps practices.
Implementing the SDLC phases, CI/CD pipelines, automation, monitoring solutions, and Infrastructure as Code (IaC) using Terraform & Ansible.
Managing cloud infrastructure with Terraform — modules, state, and reusable IaC patterns for production-ready deployments.
Explored how AWS delivers global cloud services through its worldwide regions, Availability Zones, and large-scale presence.
Initialized a GitHub repository to host my initial AWS project files and track changes.
Learned Identity and Access Management (IAM) fundamentals and best practices for managing users, permissions, and access securely.
Reviewed common DevOps interview questions to prepare for both upcoming final-round interviews.
Created an IAM group and assigned my user to it to avoid using the root account, following best practices.
Completed a final-round interview for a DevOps Engineer position at a Fortune 500 tech company.
Created my first AWS budget to monitor and control costs, ensuring no unexpected charges. As a temporary precaution, I set the limit to $1.
Completed a separate, final-round interview for a DevOps Engineer position at a Fortune 500 tech company.
Created an S3 bucket with all public access blocked and allowed only my IP via a bucket policy (IP whitelist) for controlled access.
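An IP-whitelist bucket policy like this one boils down to a JSON document that denies every request not originating from the allowed address. A minimal sketch in Python; the bucket name and IP are placeholders, not my actual values:

```python
import json

def build_ip_allow_policy(bucket_name: str, allowed_ip: str) -> str:
    """Return a bucket policy that denies all S3 access except from one IP.

    bucket_name and allowed_ip are placeholders; substitute real values.
    """
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowOnlyFromMyIP",
                "Effect": "Deny",
                "Principal": "*",
                "Action": "s3:*",
                "Resource": [
                    f"arn:aws:s3:::{bucket_name}",
                    f"arn:aws:s3:::{bucket_name}/*",
                ],
                # Deny any request NOT coming from the whitelisted address.
                "Condition": {
                    "NotIpAddress": {"aws:SourceIp": f"{allowed_ip}/32"}
                },
            }
        ],
    }
    return json.dumps(policy, indent=2)
```

The resulting JSON can be attached via the console or `aws s3api put-bucket-policy`; a Deny statement with a NotIpAddress condition is a common pattern for this kind of controlled access.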
Reviewed AWS Certified Cloud Practitioner study materials and practice exams in preparation for the certification exam.
Configured my own host using AWS Lightsail to serve content and experiment with deployment options.
No new updates for DevOps today.
Paid the $15 to initialize the domain name transfer from GoDaddy to AWS Route 53 for better management and integration with other AWS services.
No new updates for DevOps today.
Used the AWS Pricing Calculator to estimate costs for an EC2 instance running Amazon Linux 2023 (x86) in the US-West-1 (N. California) region, closer to my current location.
Reviewed AWS Certified Cloud Practitioner study materials and practice exams in preparation for the certification exam.
Launched an EC2 instance running Amazon Linux 2023 (x86) to support my GoDaddy to AWS migration plan.
Reviewed the Ansible documentation to support my current Infrastructure as Code (IaC) playbook used for provisioning my EC2 instance.
Continued my S3 bucket project, copying files from my local dev environment to the bucket I configured for it.
Updated my existing Ansible playbook to include additional tasks for configuring my EC2 instance upon launch.
Reviewed AWS Certified Cloud Practitioner study materials and practice exams in preparation for the certification exam.
Started researching GitLab as a potential alternative to GitHub for CI/CD and industry-standard SDLC practices.
Officially received the AWS Certified Cloud Practitioner certification today after learning that I passed.
Created my GitLab account and imported 3 existing GitHub repositories to explore GitLab's CI/CD capabilities.
Reconfigured my AWS Budget from a $15 limit to a $25 limit to better monitor my spending as I continue to explore AWS services.
No new updates for DevOps today.
Configured an SSL/TLS certificate using AWS Certificate Manager to establish secure connections for this website.
No new updates for DevOps today.
Configured CloudFront distribution to serve website content securely and efficiently via CDN.
Designed a version control hierarchy chart for the current version control branch structure.
Created a CloudFront invalidation to refresh cached content after updates to the origin S3 bucket.
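With boto3, an invalidation comes down to building an InvalidationBatch payload and passing it to `create_invalidation`. A sketch with the payload split into a small helper; the distribution ID and paths in the usage comment are placeholders:

```python
import time

def build_invalidation_batch(paths, caller_reference=None):
    """Build the InvalidationBatch payload for CloudFront create_invalidation.

    A unique CallerReference lets CloudFront de-duplicate accidental retries.
    """
    return {
        "Paths": {"Quantity": len(paths), "Items": list(paths)},
        "CallerReference": caller_reference or f"invalidate-{int(time.time())}",
    }

# Hypothetical usage (requires AWS credentials):
#   import boto3
#   cf = boto3.client("cloudfront")
#   cf.create_invalidation(
#       DistributionId="E123EXAMPLE",
#       InvalidationBatch=build_invalidation_batch(["/index.html", "/css/*"]),
#   )
```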
Implemented the version control branch structure as per the designed hierarchy chart.
Initialized new front-end and back-end GitHub repos to store the code for the Cloud Resume Challenge project.
Launched AWS EC2 instances with baseline and peak load configurations to simulate real-world usage patterns. Configured Auto Scaling groups to manage traffic spikes effectively.
No new updates for DevOps today.
Created and configured VPCs with public and private subnets, NAT gateways, security groups, and network ACLs to ensure a secure and efficient network architecture.
No new updates for DevOps today.
Established the requirements and initial project plan for the Cloud Resume Challenge, outlining the AWS services and architecture needed to complete the challenge.
Transferred the requirements to a requirements template and began setting up the necessary AWS services, including S3 for static website hosting.
Initialized mirroring between GitLab and GitHub repositories to ensure synchronization and backup of codebases. Created tasks in GitLab for the corresponding Cloud Resume Challenge requirements.
Migrated this updates website from an AWS S3 bucket to my reserved EC2 instance for improved performance, control, and the ability to run server-side scripts.
Created a play within my current Ansible playbook to automate the installation and renewal of SSL certificates using Certbot.
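A play along these lines might look like the following sketch. The host group, domain, and email are placeholders, and the standalone authenticator assumes port 80 is free during issuance:

```yaml
- name: Install and auto-renew Let's Encrypt certificates with Certbot
  hosts: webservers          # placeholder inventory group
  become: true
  tasks:
    - name: Install Certbot
      ansible.builtin.package:
        name: certbot
        state: present

    - name: Obtain a certificate (standalone mode)
      ansible.builtin.command: >
        certbot certonly --standalone --non-interactive --agree-tos
        -m admin@example.com -d example.com
      args:
        creates: /etc/letsencrypt/live/example.com/fullchain.pem

    - name: Schedule automatic renewal twice daily
      ansible.builtin.cron:
        name: certbot-renew
        minute: "15"
        hour: "3,15"
        job: certbot renew --quiet
```

The `creates:` argument keeps the play idempotent: Certbot only runs again if the certificate file is missing.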
Configured new security groups and updated DNS settings to point to the new AWS S3 bucket as per the Cloud Resume Challenge.
Reconfigured the CloudFront distribution settings that were causing an SSL/TLS certificate issue after the migration.
Renamed the GitLab and GitHub repositories for the Cloud Resume Challenge to ensure proper CI/CD integration. A single push to the GitLab origin pushes to the mirrored GitHub repo as well. This project requires separate front & back-end repositories.
Reviewed the previous month's cost and usage reports to monitor AWS service consumption.
Created a pipeline for the front-end deployment as part of the Cloud Resume Challenge. On push to origin main, local dev files sync to the AWS S3 bucket hosting the CRC.
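The deploy job behind that pipeline can be sketched as a `.gitlab-ci.yml` fragment. The bucket name and source directory are placeholders, and AWS credentials are assumed to be supplied as masked CI/CD variables:

```yaml
stages:
  - deploy

deploy_frontend:
  stage: deploy
  image: amazon/aws-cli:latest
  script:
    # Sync site files to the S3 bucket, deleting anything removed locally.
    - aws s3 sync ./site s3://example-crc-frontend --delete
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
```

The `rules:` clause restricts the job to pushes on the default branch, matching the "on push to origin main" behavior described above.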
Rest day / See Full Stack
Diagnosed and resolved issues in the front-end deployment pipeline as part of the Cloud Resume Challenge, ensuring that on push to origin main, local dev files sync correctly to the AWS S3 bucket hosting the CRC.
Rest day / See Full Stack
No new updates for DevOps today.
Researched the AWS Serverless Application Model (SAM) and SAM CLI to understand how to build and deploy serverless applications as part of the Cloud Resume Challenge.
After some misconfigurations in the backend repository, I created a new backend repo and reconfigured Git settings to ensure proper version control and deployment workflows. Upon pushing to origin (GitLab), commits are also synchronized to GitHub.
Initialized the .aws-sam package using AWS SAM CLI to prepare for building and deploying serverless applications as part of the Cloud Resume Challenge.
Reviewed the plan to ensure the architecture meets the project requirements and follows best practices, addressing scalability, security, and maintainability.
Struggled today with configuration issues in the AWS SAM template.yaml file, specifically around Docker, the SAM template, and the build process. Spent most of the day troubleshooting and researching solutions to get the build process working correctly.
Started to learn Docker and explored its use in containerization for mimicking a DynamoDB environment locally.
Reviewed the AWS SAM documentation and troubleshot issues with the build process to successfully deploy serverless applications as part of the Cloud Resume Challenge.
Continued testing my Python Lambda function locally using the SAM CLI and Docker to ensure it interacts correctly with the DynamoDB table as part of the Cloud Resume Challenge.
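A visitor-counter Lambda of this shape is a common Cloud Resume Challenge pattern. A sketch with the DynamoDB table injected so it can be exercised locally without AWS; the key schema and attribute names ("id", "hits") are assumptions, not necessarily my actual table design:

```python
import json

def update_visitor_count(table, key: str = "visitor_count") -> int:
    """Atomically increment and return the visitor count.

    In production `table` is a boto3 DynamoDB Table resource; any object
    with a compatible update_item works for local tests.
    """
    resp = table.update_item(
        Key={"id": key},
        UpdateExpression="ADD hits :one",  # atomic counter, no read-modify-write
        ExpressionAttributeValues={":one": 1},
        ReturnValues="UPDATED_NEW",
    )
    return int(resp["Attributes"]["hits"])

def lambda_handler(event, context, table=None):
    """API Gateway entry point; `table` is injectable for local testing.

    In the deployed function it would default to something like
    boto3.resource("dynamodb").Table(os.environ["TABLE_NAME"]).
    """
    count = update_visitor_count(table)
    return {
        "statusCode": 200,
        "headers": {"Access-Control-Allow-Origin": "*"},  # CORS for the static site
        "body": json.dumps({"count": count}),
    }
```

Because the table is a parameter, the handler can be unit-tested with a stub before running it under `sam local` against a containerized DynamoDB.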
Configured the last remaining settings and resolved build issues to successfully deploy the AWS SAM template for the Cloud Resume Challenge. This included setting up the necessary IAM roles, policies, and environment variables for the Lambda function to interact with the on-demand DynamoDB table.
Continued working on the RESTful API integration with Lambda and Docker to enhance the Cloud Resume Challenge project.
No new updates for AWS today.
No new updates for DevOps today.
See Full Stack.
See Full Stack.
Troubleshot and resolved issues with the backend deployment of the Cloud Resume Challenge, verifying that the Lambda function and DynamoDB table function correctly and that front-end and back-end integration is seamless.
Cleaned up old branches and organized the repository for better collaboration.
See Full Stack.
See Full Stack.
Reviewed the current month's AWS cost and usage reports to monitor service consumption and ensure budget adherence. Checked for unexpected charges and optimized resource usage as needed.
Created a GitLab CI/CD pipeline to automate the build, test, and deployment processes, improving development efficiency and reducing manual errors. Current jobs run on merge requests and pushes to the main branch.
Reviewed the AWS SAM template configuration while troubleshooting pipeline jobs to ensure proper deployment of serverless applications as part of the Cloud Resume Challenge.
Finalized the GitLab CI/CD pipeline with the minimum CRC requirements prior to adding additional stages for linting and testing. Testing, linting (sam validate), and sam build all run on merge requests and pushes to the main branch; sam deploy runs whenever changes are pushed to origin main.
No new updates for AWS today.
Started creating an entirely new front-end pipeline configuration for the GitLab CI/CD process using the .gitlab-ci.yml file.
No new updates for AWS today.
No new updates for DevOps today.
No new updates for AWS today.
Finalized the GitLab CI pipeline for the CRC front-end repo. After some issues regarding the AWS S3 bucket synchronization, everything is now working smoothly. I had to adjust the directory structure to ensure proper syncing.
No new updates for AWS today.
Started drafting a blog post plan to document the process and learnings from completing the Cloud Resume Challenge, including key steps, challenges faced, and solutions implemented. Decided to provision new AWS resources to support the blog infrastructure instead of the commonly recommended third-party blogging services.
Prepped for an upcoming interview by reviewing common AWS-related questions and scenarios to demonstrate my knowledge and experience with AWS services.
Continued writing the blog post documenting the process and learnings from completing the Cloud Resume Challenge, including key steps, challenges faced, and solutions implemented.
Prepped for an upcoming interview by reviewing common AWS-related questions and scenarios. Created a runtime architecture diagram to visually represent the components and interactions of the Cloud Resume Challenge project.
Created runtime architecture diagrams for both the front end and back end to provide a comprehensive overview of the Cloud Resume Challenge system's design and deployment.
Finalized and completed the Cloud Resume Challenge project, ensuring all requirements were met and the application is fully functional and deployed. Blog has been posted on dev.to and linked in the resume.
Finished the remainder of the runtime and CI/CD pipeline diagrams to provide a comprehensive overview of the system's design and deployment.
No new updates for AWS today.
Removed stale and old branches from the Git repository to maintain a clean and organized codebase. Current branches include main, html, css, javascript, and daily-updates.
No new updates for AWS today.
Reviewed Linux commands and system administration basics in preparation for an upcoming Cloud Systems Administrator interview.
Reviewed the current month-to-date (MTD) AWS cost and usage reports to monitor service consumption and ensure budget adherence. No unexpected charges were found.
I started creating my GitLab CI pipeline configuration to automate the mirroring of repositories and streamline the deployment process. My GitLab repo will continue to serve as the primary source of truth for code changes. GitHub will continue to receive mirrored updates per my pipeline.
Planned the infrastructure and architecture for a CloudOps status page to monitor and display the health and performance of various cloud services and applications. The initial three endpoints will monitor this website, resume.michael-burbank.com, and tattoosbyeder.com.
Finished the initial GitLab CI pipeline configuration to automate the mirroring of repositories and streamline the deployment process. The job is now set up to run automatically on code changes.
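One way to implement mirroring as a pipeline job is a plain `git push` to the GitHub remote. A sketch of such a job; GITHUB_TOKEN is assumed to be a masked CI/CD variable and the repository URL is a placeholder, and `GIT_DEPTH: "0"` requests a full clone so the pushed branch has complete history:

```yaml
mirror_to_github:
  stage: deploy
  image: alpine:latest
  variables:
    GIT_DEPTH: "0"
  before_script:
    - apk add --no-cache git
  script:
    # Push the current branch to the GitHub mirror.
    - git push "https://oauth2:${GITHUB_TOKEN}@github.com/example-user/example-repo.git"
        "HEAD:refs/heads/${CI_COMMIT_BRANCH}"
  rules:
    - if: '$CI_COMMIT_BRANCH'
```

GitLab also offers built-in push mirroring under the repository settings; a pipeline job like this trades that convenience for explicit control over when and what gets mirrored.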
I started creating the initial infrastructure and architecture for the CloudOps status page using Terraform. The AWS services currently being provisioned include S3 and CloudFront for the time being. See DevOps for more details.
Initialized the Terraform configuration for provisioning AWS resources, including S3 and CloudFront, to support the CloudOps status page. I am currently having issues with the CloudFront distribution settings: Origin Access Control (OAC) is a newer feature that replaces Origin Access Identity (OAI) for securing access to S3 buckets, but it has been challenging to implement correctly in Terraform.
Today, I created the AWS CloudFront distribution to serve the CloudOps status page from the S3 bucket. Configured the necessary settings for caching and security to ensure optimal performance.
Continued with the Terraform configuration for provisioning AWS resources including S3 and CloudFront to support the CloudOps status page. I made progress on configuring the CloudFront distribution settings, focusing on implementing Origin Access Control (OAC) to secure access to the S3 bucket. The S3 bucket and CloudFront distribution are now fully provisioned and secured using Terraform.
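The working OAC wiring can be sketched in HCL roughly as below. The resource names and references are assumptions from my own configuration, and the S3 bucket policy granting `cloudfront.amazonaws.com` read access (conditioned on the distribution's ARN) is still required alongside it:

```hcl
resource "aws_cloudfront_origin_access_control" "status_page" {
  name                              = "status-page-oac"
  origin_access_control_origin_type = "s3"
  signing_behavior                  = "always"
  signing_protocol                  = "sigv4"
}

resource "aws_cloudfront_distribution" "status_page" {
  enabled             = true
  default_root_object = "index.html"

  origin {
    domain_name              = aws_s3_bucket.status_page.bucket_regional_domain_name
    origin_id                = "s3-status-page"
    # OAC replaces the legacy OAI s3_origin_config block.
    origin_access_control_id = aws_cloudfront_origin_access_control.status_page.id
  }

  default_cache_behavior {
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]
    target_origin_id       = "s3-status-page"
    viewer_protocol_policy = "redirect-to-https"

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}
```

With this in place the bucket can block all public access, since CloudFront signs its origin requests with SigV4 and S3 validates them against the bucket policy.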
I reviewed and tightened my AWS security group inbound and outbound rules to enhance the security of my cloud resources. Implemented the principle of least privilege by restricting access to only necessary ports and IP addresses. This helps to minimize potential attack vectors and improve overall security posture.
I had to modify the overall project requirements to accommodate a more efficient backend infrastructure. I created a new architectural approach: AWS S3 will serve the frontend static website files, while an AWS t3.small EC2 instance handles backend processing and API requests. This hybrid approach will strengthen the overall process. I used Terraform to provision and manage the t3.small EC2 instance and related resources: enabled monitoring, configured security groups, and created resources for an Elastic IP address and subnets.
Provisioned the AWS t3.small EC2 instance using Terraform to handle backend processing and API requests for the CloudOps status page. Configured the instance with necessary security groups, monitoring, and an elastic IP address to ensure reliable and secure operation. See DevOps for more details.
Provisioned the AWS t3.small EC2 instance and related resources using Terraform and Ansible. Started creating the Ansible config files, including requirements.txt and requirements.yml, and created the inventory.ini file to manage the EC2 instance configuration and deployment.
No new updates for AWS today.
I continued reviewing Linux commands and Ubuntu system administration concepts to prepare for an upcoming interview. Between Oracle's VirtualBox and VMware Fusion, I prefer VMware Fusion for its better integration with macOS and more advanced features. Using the VMware Fusion VM, I reviewed permissions, users, groups, and logs (journalctl), and installed packages such as the Apache HTTP Server, among other topics.
No new updates for AWS today.
Continued reviewing Linux system admin tasks, commands, and concepts to prepare for an upcoming interview. Despite using Linux nearly every day, I still have some weaknesses to address. I will be ready for Friday. I also have an interview Wednesday with a smaller local company and have started prepping for that as well.
No new updates for AWS today.
No new updates for DevOps today.
Reviewed AWS cost management and budgeting tools to optimize expenses. I created a new budget that tracks monthly spending and alerts me when approaching the $30 limit.
I used Ansible to automate the deployment of the AWS EC2 instance that is being used for the backend of the CloudOps Status Page project. I decided to run NGINX again on this instance to serve the status page efficiently.
Prepping for a Cloud interview by reviewing key AWS services and best practices.
No new updates for DevOps today.
Reviewed AWS Cost Explorer and budgeting tools to optimize and control cloud spending and remain within my configured budget(s). Shut down my on-demand test EC2 instance to avoid unnecessary charges.
No new updates for DevOps today.
Reviewed the AWS Well-Architected Framework to ensure best practices in cloud architecture and improve system reliability, security, and performance efficiency, and to confirm that my own cloud infrastructure aligns with those standards.
No new updates for DevOps today.
Reviewed the CIA Triad principles of Confidentiality, Integrity, and Availability to understand their importance in information security and how they apply to AWS cloud services. Also reviewed encryption and hashing techniques to enhance data protection.
No new updates for DevOps today.
No new updates for AWS today.
Started exploring Cisco Packet Tracer to simulate network configurations and better understand networking concepts. It was a fun application to use and enhanced my practical skills in network design and troubleshooting: the real-time simulation was helpful, and it provided a safe environment to experiment with different network setups.
Conducted the final interview for a Cloud Systems Administrator position. The interview went well, and I am optimistic about the outcome. But regardless of the result, I am proud of the progress I've made and the skills I've developed throughout this learning journey. I will continue to build on this foundation and pursue new opportunities for growth and development in the cloud and DevOps space.
Continued writing Ansible playbooks to configure the AWS EC2 instance used for the backend of the CloudOps Status Page. I am considering replacing the current S3 setup hosting the frontend website files with a more robust solution, such as hosting both the front end and back end on the EC2 instance.
Troubleshot and resolved an issue with my EC2 elastic IP. The IP address was not properly associated with the instance, which was causing connectivity issues when attempting to connect to the instance over SSH. Bug fixed and working as intended.
Fixed a bug related to the Ansible inventory.ini file that was causing incorrect host configurations. The issue has been resolved, and the deployment process is now functioning correctly. I also created the deploy.sh file to automate the deployment process.
I accepted a job offer today for a Cloud Systems Administrator position. This is an important milestone in my cloud and DevOps journey and reflects the consistency, hands-on experience, and growth developed throughout this process. I am grateful for the opportunities that helped me build these skills, and I look forward to contributing to my new team while continuing to expand my expertise.
Modified the overall project requirements to accommodate the different endpoints I will be monitoring. The initial endpoint will now be the main Arizona State Parks website (azstateparks.com) instead of the previously planned endpoints. This change allows for monitoring a real-world website with significant traffic and complexity, providing better insight into the performance and health of the monitored services.
Cleaned up the local and remote branches to ensure a more organized and efficient workflow. This involved removing outdated branches, merging necessary changes, and aligning the branch structure with the current project requirements. Created a plan to focus on specific resource groups for more targeted monitoring and management.
Took a rest day to recharge and spend time with loved ones. I also outlined next week's goals across AWS, DevOps, and Full Stack so I can start the week with clear priorities.
Dug into AWS CloudWatch to strengthen my overall understanding.
No new updates for DevOps today.
Started reading the AWS Certified Solutions Architect Study Guide book written by Ben Piper and David Clinton to continue strengthening my overall AWS knowledge and prepare for my AWS Solutions Architect Associate certification exam. I also started exploring AWS CloudWatch in more depth to understand its capabilities for monitoring and observability of AWS resources and applications. I am planning to integrate CloudWatch into my existing projects to enhance monitoring and alerting capabilities.
Reviewed my current pipelines to ensure they are running smoothly and efficiently.
I had to create an AWS IAM role for my EC2 instance so it could send logs and metrics to AWS CloudWatch. I defined the necessary permissions, attached the role, and power-cycled the instance so it picked up the update and began sending logs and metrics to CloudWatch. Created two dashboards: one for the Cloud Resume Challenge project and one for this website (Daily Learning Journal). These dashboards provide visual insights into the performance and health of the monitored resources and applications, allowing for proactive monitoring and troubleshooting. Alarms are configured to watch specific metrics and trigger SNS notifications when thresholds are breached, enabling timely responses to potential issues and ensuring the reliability and availability of the monitored services.
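The alarm-to-SNS wiring comes down to a `put_metric_alarm` call. A sketch that builds the request parameters as a plain dict; the instance ID, topic ARN, and threshold in the usage comment are placeholders, not my actual values:

```python
def build_cpu_alarm(instance_id: str, sns_topic_arn: str,
                    threshold: float = 80.0) -> dict:
    """Build kwargs for CloudWatch put_metric_alarm: alarm when average
    EC2 CPU stays above `threshold` for two consecutive 5-minute periods."""
    return {
        "AlarmName": f"high-cpu-{instance_id}",
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "Statistic": "Average",
        "Period": 300,              # seconds per evaluation window
        "EvaluationPeriods": 2,     # must breach twice in a row
        "Threshold": threshold,
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [sns_topic_arn],  # SNS topic notified on ALARM
    }

# Hypothetical usage (requires AWS credentials):
#   import boto3
#   boto3.client("cloudwatch").put_metric_alarm(
#       **build_cpu_alarm("i-0123456789abcdef0",
#                         "arn:aws:sns:us-west-1:123456789012:alerts"))
```

Requiring two consecutive breaches is a simple way to keep short CPU spikes from paging you.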
I updated one of my Ansible playbooks, which configures the AWS EC2 instance for this website, to install and configure the AWS SSM and CloudWatch agents for enhanced monitoring and management capabilities.
I created and amended a few AWS CloudFormation templates today to automate the provisioning of AWS resources, defining the necessary resources, configurations, and dependencies for efficient and consistent deployment. With CloudFormation I can manage and version-control my infrastructure as code, enabling repeatable, scalable deployments across environments while reducing the risk of configuration drift.
See AWS for today's update.
I defined the AWS scope for my Terraform work: reusable templates to provision core infrastructure components and keep resource configuration consistent across environments. My focus today was on organizing AWS-specific infrastructure definitions so future deployments are predictable, repeatable, and easier to expand. This helps me strengthen my overall understanding of Terraform and its capabilities for infrastructure as code, while also improving the efficiency and reliability of my AWS resource provisioning processes.
I created a dedicated GitLab repository to manage Terraform as code, with version control and branch-based workflow for safe changes. This gives me a clean foundation for collaboration, change tracking, and CI/CD-ready IaC practices as the project grows. I also started defining reusable Terraform modules for core infrastructure components, such as EC2s, VPCs, and security groups, to ensure consistent resource provisioning across environments. This approach allows for more efficient management of infrastructure as code and promotes best practices in DevOps workflows.
Today, I provisioned a couple of AWS EC2 instances using Terraform to strengthen my understanding of IaC and automate the deployment of infrastructure.
I focused on implementing best practices for Infrastructure as Code (IaC) using Terraform, ensuring that my deployments are consistent, repeatable, and maintainable.
I worked on implementing Infrastructure as Code (IaC) using Terraform, allowing for automated and consistent provisioning of infrastructure resources. This promotes best practices, improves efficiency, and helps reduce manual errors, ensuring that infrastructure is version-controlled and easily reproducible. Reviewed the Terraform documentation and latest releases to stay up to date with new features and best practices.
Continued reading the AWS Certified Solutions Architect Study Guide book to learn more about AWS architectural best practices. This book is a great resource for strengthening my overall AWS knowledge and preparing for the AWS Solutions Architect Associate certification exam.
Continued implementing best practices and refining my approach to Infrastructure as Code (IaC) using Terraform, ensuring that deployments are consistent, repeatable, and maintainable. Reviewed the Terraform documentation and latest releases to stay up-to-date with new features and best practices.
Today, I started writing a new Terraform module for provisioning an AWS EC2 instance. This module includes the necessary configurations for the instance, such as the AMI, instance type, security groups (with SSH access), subnets, internet gateway, and other essential resources. By creating this module, I strengthened my current understanding of Terraform and its capabilities for IaC and how it benefits workplace automation and efficiency.
No new updates for AWS today.
No new updates for DevOps today.
No new updates for Terraform today.
Today, I created a new key pair in AWS EC2 to provide secure SSH access to my recent Terraform EC2 instance. This involved generating a new key pair, downloading the private key, and configuring it for use with my EC2 instance. By doing this, I ensured that I have secure access to my instance while adhering to best practices for managing SSH keys in AWS.
No new updates for DevOps today.
Today, I worked on configuring security groups in Terraform to control inbound and outbound traffic for my EC2 instances. This involved defining rules for allowing SSH access, HTTP, and HTTPS traffic, ensuring that my instances are secure while still accessible for necessary operations. I also ensured I was able to successfully connect via SSH to the EC2 instance using the newly created key pair and that the security group rules were correctly applied to allow the connection. This process helped me strengthen my understanding of Terraform's capabilities for managing AWS resources and implementing security best practices.
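A security group of that shape can be sketched in HCL as follows. The VPC reference is an assumption from my own configuration, and the admin CIDR is a placeholder from the documentation range (203.0.113.0/24):

```hcl
resource "aws_security_group" "web" {
  name        = "web-sg"
  description = "SSH from one admin IP plus public HTTP/HTTPS"
  vpc_id      = aws_vpc.main.id

  ingress {
    description = "SSH from the admin IP only"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["203.0.113.7/32"]
  }

  ingress {
    description = "Public HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "Public HTTPS"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    description = "Allow all outbound"
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```

Pinning SSH to a single /32 while leaving 80/443 open is the usual least-privilege split for a public web host.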
Today, I continued strengthening my understanding of AWS resource provisioning using Terraform by creating, updating, and destroying various AWS resources. This hands-on experience allowed me to see the practical application of Terraform in managing AWS infrastructure and reinforced my knowledge of how to effectively use Terraform for infrastructure as code.
See Terraform for today's update.
I continued working on my Terraform module for provisioning AWS EC2 instances by adding configurations for Virtual Private Cloud (VPC), Elastic IP (EIP), and security groups. I also started writing output values to return critical information about the provisioned resources after the infrastructure deployment. I also started looking into Terraform modules listed on HashiCorp's Terraform Registry to explore reusable modules for AWS resources, which can help streamline my infrastructure provisioning and management processes.
Spent the day reviewing different AWS CloudFormation templates and documentation to both visualize and understand different approaches for different use cases. This helped me gain insights into best practices for structuring CloudFormation templates and how to effectively use them for infrastructure as code in AWS.
See AWS for today's update.
See AWS for today's update.
Today, I modified a few CloudFormation templates to include additional resources, amended existing resource configurations, and added output values that return critical information about the provisioned resources after deployment. I also focused on drift detection and management across different stacks to ensure deployed infrastructure remains consistent with the defined templates, which is crucial for maintaining the integrity and reliability of an infrastructure-as-code approach in AWS.
See AWS for today's update.
See AWS for today's update.
Today, I continued creating a list of desired conferences and events hosted by AWS. These events help me stay up to date with the latest developments in AWS services, best practices, and industry trends, and attending them would strengthen my professional network. I also started looking into AWS Summit events, which are free events held in cities around the world, offering opportunities to learn about AWS services, hear from AWS experts, and connect with other AWS users. I plan to attend an AWS Summit in the near future to further grow my knowledge and network within the AWS community.
See AWS for today's update.
See AWS for today's update.
No new updates for AWS today.
No new updates for DevOps today.
No new updates for Terraform today.