Widen Your Expertise with Infrastructure as Code: Short Tutorial

Amid recent layoffs, a strong foundation in Cloud Engineering and IaC can be an invaluable skill for software engineers in today’s job market.

Orkhan Huseynli
6 min read · May 10, 2023


With recent layoffs of thousands of tech professionals, the software engineering job market, especially for junior developers, has become quite competitive. Therefore, I would like to offer some friendly advice to my junior colleagues: widen your expertise in the development process overall.

As the supply of engineers currently exceeds demand in the short run, tech employers expect candidates to have a wider range of skills than is traditionally expected. For instance, knowledge of DevOps or Cloud Engineering is now an additional requirement on top of the already loaded expectations in Software Engineering. In this context, I would like to introduce you to Infrastructure as Code with Terraform, which can be useful not only for day-to-day coding tasks but also for interview processes where the interviewer may expect you to have experience in DevOps or Cloud Engineering.

Here is the link to the GitHub repository: link

Prerequisites

If you have an AWS account, have read about IAM, and have a general understanding of policies, roles, and permissions in AWS, then this blog post will be easy to follow.

I recommend going through my previous post, Creating Users and Roles in AWS: 5 min read, to understand the snippets of code that use test_user and test_user_role, which are supposed to be created in the AWS console before running this tutorial.
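For reference, the important part of test_user_role is its trust policy, which must allow your IAM user to call sts:AssumeRole. Expressed in Terraform terms, purely as an illustration of what the console-created role should contain (the user ARN below is a placeholder), it looks roughly like this:

# Sketch of the trust relationship test_user_role needs (illustrative only).
data "aws_iam_policy_document" "test_user_role_trust" {
  statement {
    effect  = "Allow"
    actions = ["sts:AssumeRole"]

    principals {
      type        = "AWS"
      identifiers = ["arn:aws:iam::<your account number>:user/test_user"]
    }
  }
}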

Introduction

Although you could complete this tutorial with your root user access, without going into the details of creating another user with limited access and privileges, I recommend setting up a separate user with limited access to AWS resources and services for learning purposes.

We are going to create an Infrastructure as Code (IaC) solution for a simple scenario: a file created or uploaded to an S3 bucket triggers a Lambda function, and the logs related to that event are stored in CloudWatch.

Step 1: Set Up and Assume a Role

AWS AssumeRole allows an IAM user to request temporary security credentials, scoped to the permissions of a specific role (in our case test_user_role, which we have already created), to access AWS resources and services.

terraform {
  required_providers {
    # setting the AWS provider and version
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"

  assume_role {
    # The role ARN within Account <your account number> to AssumeRole into.
    role_arn = "arn:aws:iam::<your account number>:role/test_user_role"
  }
}
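As an optional refinement (not required for this tutorial), you could avoid hardcoding the account number by moving it into a variable:

variable "account_id" {
  description = "The AWS account that owns test_user_role"
  type        = string
}

# and then, inside the provider's assume_role block:
# role_arn = "arn:aws:iam::${var.account_id}:role/test_user_role"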

Step 2: Create S3 bucket resource in AWS

resource "aws_s3_bucket" "tf_example_bucket" {
bucket = "tf_example_bucket"
}

If you run terraform plan and terraform apply in the terminal, the S3 bucket will be created in your AWS account. You can use this simple script to test your Terraform setup. If everything succeeds at this point, we are good to move on to the next step.

Step 3: Create Lambda function resource

Creating a Lambda resource is not as straightforward as the S3 bucket, as it requires a few more blocks of code. First, let's look at the simplest setup.

resource "aws_lambda_function" "tf_example_notif_func" {
function_name = "tf_example_notif_func"
runtime = "nodejs18.x"
}

At this point we have defined the resource's name and runtime, which is Node.js 18. We are still missing the actual code for our Lambda. Hence, we must write the handler code in a JS file and then attach it to our Lambda resource definition. Here is the index.mjs file we are going to use (for the sake of this tutorial, it must be located in the same directory as the Terraform scripts):

export const handler = async (event) => {
  console.log("A new object was created in the S3 bucket")
  return { "status": "ok" }
}

The file with the actual JS code is now ready to be zipped and attached to our Lambda resource with the help of the "archive_file" data source:

data "archive_file" "index" {
type = "zip"
source_file = "index.mjs"
output_path = "index.zip"
}

resource "aws_lambda_function" "tf_example_notif_func" {
filename = "index.zip"
function_name = "tf_example_notif_func"
handler = "index.handler"
source_code_hash = data.archive_file.index.output_base64sha256
runtime = "nodejs18.x"
}

Though the code above initialises the resource, it is not yet complete: role is a required argument of aws_lambda_function. Our Lambda function needs to be associated with an execution role that can fetch temporary security credentials to access your AWS services and resources.

Hence, the earlier code must be edited as follows:

/* define lambda function */
data "archive_file" "index" {
  type        = "zip"
  source_file = "index.mjs"
  output_path = "index.zip"
}

resource "aws_lambda_function" "tf_example_notif_func" {
  filename         = "index.zip"
  function_name    = "tf_example_notif_func"
  role             = aws_iam_role.tf_example_iam_assume_role.arn
  handler          = "index.handler"
  source_code_hash = data.archive_file.index.output_base64sha256
  runtime          = "nodejs18.x"

  tags = {
    Name        = "tf_example"
    Environment = "test"
  }
}

/* Assume role */
resource "aws_iam_role" "tf_example_iam_assume_role" {
  name               = "tf_example_iam_for_lambda_assume_role"
  assume_role_policy = data.aws_iam_policy_document.tf_example_policy_document_assume_role.json

  tags = {
    Name        = "tf_example"
    Environment = "test"
  }
}

data "aws_iam_policy_document" "tf_example_policy_document_assume_role" {
  statement {
    effect = "Allow"

    principals {
      type        = "Service"
      identifiers = ["lambda.amazonaws.com"]
    }

    actions = ["sts:AssumeRole"]
  }
}

As can be seen from the code, an aws_iam_role is created and associated with our aws_lambda_function, and its trust policy allows the Lambda service to assume it.
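At this point you can also, optionally, expose a few values as Terraform outputs so you do not have to hunt for them in the console after applying:

output "bucket_name" {
  value = aws_s3_bucket.tf_example_bucket.bucket
}

output "lambda_function_arn" {
  value = aws_lambda_function.tf_example_notif_func.arn
}

output "lambda_role_arn" {
  value = aws_iam_role.tf_example_iam_assume_role.arn
}

These values are printed at the end of terraform apply.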

Step 4: AWS Lambda function for S3 bucket event notification

First we must give the S3 service permission to invoke the Lambda function that we created earlier.

/* Give permission to S3 to invoke Lambda */
resource "aws_lambda_permission" "tf_example_lambda_permission" {
  function_name = aws_lambda_function.tf_example_notif_func.arn
  principal     = "s3.amazonaws.com"
  source_arn    = aws_s3_bucket.tf_example_bucket.arn
  action        = "lambda:InvokeFunction"
}

Now we are ready to write the S3 bucket notification resource.

resource "aws_s3_bucket_notification" "tf_example_notification" {
bucket = aws_s3_bucket.tf_example_bucket.id
lambda_function {
lambda_function_arn = aws_lambda_function.tf_example_notif_func.arn
events = ["s3:ObjectCreated:*"]
}

depends_on = [aws_lambda_permission.tf_example_lambda_permission]
}
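Note that the lambda_function block of aws_s3_bucket_notification also accepts optional filter_prefix and filter_suffix arguments, in case you only want notifications for certain objects (the values below are purely illustrative):

# inside the lambda_function block of the notification resource:
lambda_function {
  lambda_function_arn = aws_lambda_function.tf_example_notif_func.arn
  events              = ["s3:ObjectCreated:*"]
  filter_prefix       = "uploads/" # only objects under this prefix
  filter_suffix       = ".txt"     # only objects with this suffix
}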

Step 5: Configure CloudWatch logging for the Lambda function

Although this step is not mandatory for our demo solution, logging is an essential part of the development process. In this case, we want to quickly glance at the logs to sanity-check the whole flow: putting an object into the S3 bucket triggers the Lambda function.

Here is the Terraform code:

1. Create a log group whose name follows the structure /aws/lambda/<function name>. You can read more about this in the AWS docs.
resource "aws_cloudwatch_log_group" "function_log_group" {
name = "/aws/lambda/${aws_lambda_function.tf_example_notif_func.function_name}"
retention_in_days = 7
lifecycle {
prevent_destroy = false
}
}

2. Create a logging policy and attach it to the existing role defined for the notification Lambda function.

resource "aws_iam_policy" "function_logging_policy" {
  name   = "function-logging-policy"
  policy = data.aws_iam_policy_document.function_logging_policy_doc.json
}

data "aws_iam_policy_document" "function_logging_policy_doc" {
  statement {
    effect    = "Allow"
    actions   = ["logs:CreateLogStream", "logs:PutLogEvents"]
    resources = ["arn:aws:logs:*:*:*"]
  }
}

resource "aws_iam_role_policy_attachment" "function_logging_policy_attachment" {
  role       = aws_iam_role.tf_example_iam_assume_role.id
  policy_arn = aws_iam_policy.function_logging_policy.arn
}
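Alternatively, AWS ships a managed policy, AWSLambdaBasicExecutionRole, that grants the same log-writing permissions (plus logs:CreateLogGroup); attaching it to the role is a common shortcut instead of writing the policy document yourself:

# Optional alternative: attach the AWS-managed basic execution policy.
resource "aws_iam_role_policy_attachment" "basic_execution" {
  role       = aws_iam_role.tf_example_iam_assume_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}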

Step 6: Build and Test

Let's build our solution by running a few simple Terraform commands.

1. Initialize the Terraform working directory; this downloads the provider plugins and prepares the directory that contains your configuration files.
terraform init

2. The terraform plan command creates an execution plan describing the changes that are expected to be applied to your infrastructure. By default, when Terraform creates a plan, it also validates the whole set-up.

In addition, it reads the current state of any already-existing remote objects, comparing the current configuration to the prior state and identifying any differences. As a result, it proposes a set of change actions.

terraform plan

3. Although terraform apply automatically creates a new execution plan as if you had run terraform plan, running the two commands sequentially is the clearer option. As you may have already guessed, the terraform apply command executes the actions proposed in a Terraform plan.

terraform apply

Now you can go to your AWS console, find the newly created S3 bucket, and upload a new file. Then go to the CloudWatch service and find your logs under log groups. If you see the logs from the Lambda function, your solution works as expected.
