
DevOps Toolchain: Infrastructure as Code Part 2

Welcome back! I hope you enjoyed the first round of Terraform goodness. We're going to round it off today with some juicy AWS deployments and getting our Terraform state files up on S3.

Last time, we concluded by launching an EC2 instance with a Security Group into the cloud. w00t. Congrats. Ready to make it scale with advanced networking strategies?

The first thing we're going to do is flesh out our main.tf file. We're going to keep the same region, but use a different AMI.

resource "aws_launch_configuration" "testrun" {
  image_id        = "ami-40d28157"
  instance_type   = "t2.micro"
  security_groups = ["${aws_security_group.instance.id}"]

  lifecycle {
    create_before_destroy = true
  }
}

You can see the resource type changed from "aws_instance" to "aws_launch_configuration", and the "ami" argument is now "image_id". This matters because we're no longer defining a single instance: a launch configuration is the template the Auto Scaling Group (coming up below) uses to launch however many instances it needs.

The lifecycle block is new, too. Setting create_before_destroy reverses Terraform's default replacement order: when a change forces a resource to be replaced, the new one is created first and the old one is only destroyed once its replacement exists. Without it, terraform plan will warn you that the resource is going to be destroyed and recreated, leaving a gap with nothing running.

Next, we're going to add the same lifecycle block to the security group for our instances:

resource "aws_security_group" "instance" {
  name = "terraform-example-instance"

  ingress {
    from_port   = "${var.server_port}"
    to_port     = "${var.server_port}"
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  lifecycle {
    create_before_destroy = true
  }
}

Same concept: you don't want to blow away the old security group before the new instances are online. Because the launch configuration sets create_before_destroy and references this security group, the security group has to set it as well, otherwise Terraform can't replace the two resources in the right order.

resource "aws_autoscaling_group" "testrun" {
  launch_configuration = "${aws_launch_configuration.testrun.id}"
  availability_zones   = ["${data.aws_availability_zones.all.names}"]

  load_balancers    = ["${aws_elb.testrun.name}"]
  health_check_type = "ELB"

  min_size = 2
  max_size = 10

  tag {
    key                 = "Name"
    value               = "terraform-asg-testrun"
    propagate_at_launch = true
  }
}

resource "aws_elb" "testrun" {
  name               = "terraform-asg-testrun"
  availability_zones = ["${data.aws_availability_zones.all.names}"]
  security_groups    = ["${aws_security_group.elb.id}"]

  listener {
    lb_port           = 80
    lb_protocol       = "http"
    instance_port     = "${var.server_port}"
    instance_protocol = "http"
  }

  health_check {
    healthy_threshold   = 2
    unhealthy_threshold = 2
    timeout             = 3
    interval            = 30
    target              = "HTTP:${var.server_port}/"
  }
}

Two new resources are added into the mix: "aws_autoscaling_group" and "aws_elb". The Auto Scaling Group adjusts the number of instances to match traffic demand, so rather than having an administrator or engineer fire off more servers, AWS does it for you. min_size and max_size bound how many instances exist at any given time: you'll never have fewer than 2, and never more than 10 regardless of traffic. The availability zones come from a data source we'll create later; it lets the group spread instances across every availability zone in the region (Terraform Up and Running, pg. 51).

The Elastic Load Balancer distributes traffic evenly between the instances. You'll notice it uses the same data source and has its own security group, which we'll create shortly. The listener accepts HTTP traffic on port 80 and forwards it to the server_port (8080) on the instances. As an exercise, you can create variables for the load balancer ports; a sketch of what that might look like follows.
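
Here's one way that exercise could go. The elb_port variable is my own naming, not part of the original config; you'd then substitute "${var.elb_port}" for the hard-coded 80 in the listener and in the ELB security group's ingress rule:

variable "elb_port" {
  description = "The port the ELB will listen on for HTTP requests"
  default     = 80
}

# In the aws_elb listener:
#   lb_port = "${var.elb_port}"
#
# And in the "elb" security group ingress block:
#   from_port = "${var.elb_port}"
#   to_port   = "${var.elb_port}"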

The health_check block tells the load balancer how to verify that instances are in a state to accept traffic. If an instance reports as OutOfService, the load balancer stops routing traffic to it, and because the Auto Scaling Group's health_check_type is set to "ELB", the group will replace the unhealthy instance (which is where autoscaling comes into play).

resource "aws_security_group" "elb" {
  name = "terraform-testrun-elb"

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

Here we have the ELB security group. You can see that it accepts inbound traffic on port 80 and allows outbound traffic to anywhere. Is it all coming together now?

Next, we're going to create our data source:

data "aws_availability_zones" "all" {}

Finally, we'll change the output so it returns the ELB's DNS name, and add the server_port variable:

output "elb_dns_name" {
  value = "${aws_elb.testrun.dns_name}"
}

variable "server_port" {
  description = "The port the server will use for HTTP requests"
  default     = 8080
}

Now, at this point you can run your terraform plan and terraform apply commands and check out your hard work. Be sure to add your key pair back to the launch configuration so you can SSH into your instances; see the sketch below.
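
On a launch configuration the key pair is set with the key_name argument. A minimal sketch, assuming you already have a key pair in this region (the name "deployer-key" here is just a placeholder, swap in your own):

resource "aws_launch_configuration" "testrun" {
  image_id        = "ami-40d28157"
  instance_type   = "t2.micro"

  # "deployer-key" is a placeholder; use the name of your own EC2 key pair
  key_name        = "deployer-key"

  security_groups = ["${aws_security_group.instance.id}"]

  lifecycle {
    create_before_destroy = true
  }
}

Depending on what part 1 left in your instance security group, you may also need an ingress rule for port 22 before SSH will actually get through.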

If you want to store your .tfstate file remotely, you can add an extra block to your main.tf file. You'll need an S3 bucket, which you can create with Terraform or manually... but what would be a Terraform tutorial without adding that bit as well?!

resource "aws_s3_bucket" "terraform_state" {
  bucket = "tf-up-n-runnin"

  versioning {
    enabled = true
  }

  lifecycle {
    prevent_destroy = true
  }
}


Those of you who tangle with AWS regularly know that the bucket name has to be globally unique. Sorry, Internet, but that bucket name that screams creative genius is mine and mine alone ;)

This code will create the S3 bucket. To actually store your terraform.tfstate in it, add the following backend block to main.tf in any configuration you want to use the remote state file:

terraform {
  backend "s3" {
    bucket  = "tf-up-n-runnin"
    region  = "us-east-1"
    key     = "terraform.tfstate"
    encrypt = true
  }
}

This will allow you to easily collaborate with others when working with the same infrastructure.

That last block tells Terraform to push your state (not your configuration) up to the bucket instead of keeping it on local disk. After adding it, run terraform init so Terraform can set up the backend and copy your existing local state into the bucket.

Any time you want a configuration to work against this terraform.tfstate, add that backend block to it and Terraform will read from and write to the same state file.
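
If you only need to read values out of that state from a separate configuration, say to look up the ELB's DNS name, rather than manage the same resources, the terraform_remote_state data source can point at the same bucket. A rough sketch, assuming pre-0.12 syntax and the bucket and key used above:

data "terraform_remote_state" "testrun" {
  backend = "s3"

  config {
    bucket = "tf-up-n-runnin"
    key    = "terraform.tfstate"
    region = "us-east-1"
  }
}

# Outputs from the remote state can then be referenced, for example:
#   "${data.terraform_remote_state.testrun.elb_dns_name}"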

A build server is another option when working in teams with Terraform. It gives you a lot more flexibility (and it's safer) than just having remote state files. We'll revisit this later. Next time we're going to take a gander at Jenkins and see all the awesome things it can do for us.

Thanks for reading!
